fgilio.com
A blog about code and useful tips

How to mark something as binary in git
https://fgilio.com/how-to-mark-something-as-binary-in-git/
Thu, 05 Apr 2018 16:01:57 +0000

Sometimes you may want to override git's decision of whether a file contains text or binary data. For example, you may have to pull in some external library and want git to track it but not diff its contents. For this, git lets you define what to do with specific files or entire directories.

Simply add a `.gitattributes` file in the root of the project, or in the parent directory of the one you want to affect, and use the following syntax:

# For files
file.txt binary
# For entire directories
folder/* binary 
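To confirm that the attribute is actually being picked up, `git check-attr` shows what git resolved for a given path. Here's a minimal sketch in a throwaway repo (the folder and file names are just examples):

```shell
# Throwaway repo just to inspect attribute resolution
tmp=$(mktemp -d)
cd "$tmp"
git init -q

# Mark everything under folder/ as binary
printf 'folder/* binary\n' > .gitattributes
mkdir folder
echo 'minified' > folder/lib.min.js

# Ask git which value the "binary" attribute has for the file
git check-attr binary folder/lib.min.js
# prints: folder/lib.min.js: binary: set
```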

How to use Laravel’s Job chaining
https://fgilio.com/how-to-use-laravel-job-chaining/
Fri, 26 Jan 2018 22:29:44 +0000

This is a personal one: I simply love how clean and simple this feature is.

In the past, I've had to implement job chaining and TBH the end result was pretty gross. It was a system that needed to take a file and pass it through a series of steps, so nothing too complex. But, because of the way the chain worked, every job had to know about the next one in line (except the last one, of course). The way Laravel does it now allows us to completely decouple the jobs from each other. Yeah, I know, it's awesome. It looks like this:
FirstJobToRun::withChain([
    new SecondJobToRun,
    new ThirdJobToRun,
    new FourthJobToRun
])->dispatch();
So yes, it's that simple. But if you're like me, simple is never enough, because of course you have that one case in which you need to pass parameters to your jobs. So, how about that? Well, thankfully that's simple too. This example is the one from the official docs with a simple parameter added:
ProcessPodcast::withChain([
    new OptimizePodcast($podcast),
    new ReleasePodcast($podcast)
])->dispatch($podcast);
And that's all. Now go read the official docs and then refactor all those tangled job chains you've written before. Thank you to everyone who worked on this. PS: Wouldn't it be really cool if we could just define common parameters for all jobs in a chain?

Working with discount coupons in Laravel Spark
https://fgilio.com/working-with-discount-coupons-in-laravel-spark/
Wed, 29 Nov 2017 00:09:50 +0000

* EDIT 2018/04/04: This is applicable to both Spark 6 and 5

* EDIT 2018/02/19: This post was written for Spark 5, but most likely it still applies perfectly to Spark 6. I've read the changelog and upgrade instructions, and they don't mention any changes regarding coupons. I'll update this post once we upgrade to Spark 6.

A couple of days ago it was Black Friday, and at publica.la we decided to put out a beefy discount for new customers. Thankfully we use Laravel Spark to handle all the SaaS boilerplate needed to bill our clients, and while it does offer full support for discount coupons… IMO the docs are a bit lacking. I even tweeted asking for help but, ironically, all I got was a response from a guy who thought I was looking for a discount to buy Spark itself.

* I'm going to assume you're using Stripe as the gateway, but Braintree should be pretty similar (tell us in the comments if you know about it).

Spark offers two different ways to go about coupons:

1. Individually with every purchase, like you would expect from any kind of system. This is the one that's missing from the docs. Basically, you need to set up your coupons 100% on Stripe's side and then pass the code as a query string parameter on your Spark-powered site. So, for example:

https://your.site/?coupon=coupon_code_here

With that in place, Spark will then automatically validate it against Stripe and show an input box in case the user wants to change the code.

2. And the other is called site-wide promotions, which is exactly what we needed 🎉! The way it works is by globally forcing a coupon code into the query string section of the URL (don't worry, it won't mess with any existing values), while still letting users change it in case they want to. Yep, perfect for a Black Friday kind of deal.

All you need to do is use this code:

/*
 * This will probably go in your SparkServiceProvider.php
 */
Spark::promotion('coupon_code_here');

So you wanna yarn add a dependency from a git repo?
https://fgilio.com/so-you-wanna-yarn-add-a-dependency-from-a-git-repo/
Sun, 08 Oct 2017 20:53:14 +0000

I’m posting this because, even though it's really basic stuff, I’ve bumped my head against this wall way too many times before successfully managing to accomplish the task: using yarn add to pull a dependency while using a git repo as the source.

It is, in fact, really simple (though I feel it's way too verbose):

yarn add git+ssh://git@gitlab.com/organization/project

You can even specify a branch or a particular commit by adding it at the end, like this:

# To point to a branch:
yarn add git+ssh://git@gitlab.com/organization/project#dev
# To point to a commit, we just use the hash (the short one in this case):
yarn add git+ssh://git@gitlab.com/organization/project#c8023772

It’s also listed in the official docs: https://yarnpkg.com/lang/en/docs/cli/add/

So, have you ever had to do this? Do you find it useful?

Easily transfer entire local directories to Amazon S3 using s3-parallel-put
https://fgilio.com/easily-transfer-entire-local-directories-to-amazon-s3-using-s3-parallel-put/
Mon, 04 Sep 2017 05:09:03 +0000

A couple of weeks ago I faced the need to upload a large number of files to Amazon S3; we’re talking about lots of nested directories and ~100GB. So, after entering panic mode for a couple of seconds, I turned to our trusty Google and kindly filled its input with “transfer from local to amazon s3” (well, I don’t really remember exactly what I searched). I wasn't feeling really hopeful until I found s3-parallel-put, which seemed to do just what I needed.

Here’s the repo: https://github.com/mishudark/s3-parallel-put

It’s a smart little Python script that does just that: transfer possibly huge numbers of files to Amazon S3. And yes, it can parallelize the workload, making it blazing fast.

It has a couple of dependencies:

# Make sure to have pip updated.
# You may need to use sudo
apt-get update && apt-get -y install python-pip
pip install boto
pip install python-magic

Then, to install it, you just have to download the thing and make it executable:

curl https://raw.githubusercontent.com/mishudark/s3-parallel-put/master/s3-parallel-put > s3-parallel-put
chmod +x ./s3-parallel-put

It needs the AWS credentials as environment variables, which you can easily set:

export AWS_ACCESS_KEY_ID=<blablablablablablabla>
export AWS_SECRET_ACCESS_KEY=<blebleblebleblebleblebleblebeble>

And, finally, you fire it up like this:

# This is considering that the script is in the current directory
./s3-parallel-put --bucket=<enter-destination-bucket-name-here> --bucket_region=us-west-2 --put=update --processes=30 --content-type=guess --log-filename=./s3pp.log /path/to/source/directory

You can do a dry run with --dry-run.

You can speed up the upload using --put=stupid. It won’t check if the object already exists, thus making fewer calls. Use with caution.

You can grant public read access to objects with --grant=public-read.

You may have noticed that you can specify a log file, which is really handy because sometimes stuff happens. But you may also end up with an enormous log file, so here's a quick grep to search for any errors: grep "ERROR" s3pp.log.
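For example, with a made-up sample log (the lines below are invented just for illustration), you can both list the failures and count them:

```shell
# Fake log standing in for a real s3pp.log
cat > s3pp.log <<'EOF'
INFO uploaded assets/cover.jpg
ERROR failed to upload assets/book.pdf
INFO uploaded assets/page-001.jpg
EOF

# List the failing lines, then count them
grep "ERROR" s3pp.log
grep -c "ERROR" s3pp.log
# prints: 1
```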

And that’s all. It has a lot more options that might come in handy depending on your needs, so I encourage you to go and check it out.

Thanks for reading, and I hope you find this as useful as I did.

Let me know in the comments if you have any tips.

Cherry-picking your way out of trouble
https://fgilio.com/cherry-picking-your-way-out-of-trouble/
Mon, 14 Aug 2017 21:36:47 +0000

I find cherry-pick to be one of those great underutilized features of git. And maybe that’s good, because it’s mainly used to apply hot fixes.

The way it works is very simple: it lets you apply one or more commits from one branch onto another. Awesome, right?

Imagine a situation in which you have two branches, master and payments-refactor. You’re battling your way through a tough refactor and suddenly a bug emerges in production, but you find out that you’ve already fixed it during the refactor and have an isolated commit containing the changes. You need to replicate those changes in the master branch and redeploy the app. But copy-pasting, or manually redoing the changes, is cumbersome and probably even error-prone. Well, cherry-pick comes to the rescue. It lets us replicate that single commit onto the master branch, all while preventing duplicate work and keeping our git history clean. The only thing we need is the abbreviated commit hash (or the full one); we move to the branch where we want to incorporate the changes and use it like this:

git checkout master
git cherry-pick 3f75a585

That’s all!
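To see the whole flow end to end, here's a sketch in a throwaway repo (the branch, file names, and commit messages are all made up for the example):

```shell
# Throwaway repo with a fix commit on a side branch
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name dev
default=$(git symbolic-ref --short HEAD)   # master or main, depending on git config

echo 'base' > app.txt
git add app.txt
git commit -qm 'initial commit'

# The refactor branch, containing the isolated bug fix
git checkout -qb payments-refactor
echo 'patched' > fix.txt
git add fix.txt
git commit -qm 'fix production bug'
fix=$(git rev-parse --short HEAD)

# Back on the main branch, replay just that one commit
git checkout -q "$default"
git cherry-pick "$fix"
cat fix.txt
# prints: patched
```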

I hope you have to use this as little as possible, but find it useful when the time comes.

What do you think of cherry-pick, what do you use it for?

How to keep an SSH session alive
https://fgilio.com/how-to-keep-an-ssh-session-alive/
Sun, 06 Aug 2017 05:41:11 +0000

* This is a “Has it ever happened to you…” kind of post.

Imagine you’re logged in to a server doing some magical stuff. Then you go grab a coffee and when you come back… you’re logged out from the server. Yes, it sucks. You have to SSH in again, cd into the same dir you were in before, and so on. Ain’t nobody got time for that.

What if I told you that you can keep an SSH session alive? 🚀
All you have to do is edit your ~/.ssh/config file and add the following:

Host *
ServerAliveInterval 60

You can define a specific host and choose the interval. Most servers with which I have this issue have a rather low timeout, so I’ve chosen to send the keep-alive signal every 60 seconds.
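If you'd rather not touch every connection, you can scope the setting to a single host. It's also worth knowing about ServerAliveCountMax, the number of unanswered probes ssh tolerates before disconnecting. The host name and values here are placeholders:

```
# ~/.ssh/config — applies only to this host
Host legacy-server.example.com
    ServerAliveInterval 60
    ServerAliveCountMax 3
```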

And bam, you’ve freed yourself from this annoyance.

rockymonkey555 over at stackoverflow.com recommends also running chmod 600 ~/.ssh/config, “because the config file must not be world-readable”.

I hope this is as useful to you as it’s been for me.

The first one
https://fgilio.com/1st-blog-post/
Sun, 06 Aug 2017 00:40:22 +0000

This is the first post on this brand new blog, and it has a very descriptive title.

I’d wanted to start a blog for some time, but always ended up postponing it. Mainly because of a generous dose of impostor syndrome, but also because I haven’t been good at making the time to build the blog in the first place. And, yes, as a developer I wanted my blog to be just perfect.

So, here we are. This is me forcing myself to start sharing, on a sketchy and rushed blog.

Also, I love WordPress, and this gives me an excuse to tinker with it a little more, something I haven’t really done since I stopped working with it daily.

I honestly have no clue if this is going to be a weekly thing, monthly or whatever. Though it won’t be daily for sure. Let’s just see how it feels to share some thoughts, things I learn or anything else.

Talk to you in the next one!

PS 1: I think most of the next posts will be meticulously crafted micro tutorials/reminders based on the code snippets I have stored, like, everywhere.

PS 2: As you can see there are no comments available here, but there will be in the upcoming entries.

PS 3: See that line above where I said “I wanted to start a blog for some time”? Well, I’ve had this post drafted for 7 months.
