In HTML you can achieve this with a simple link to an element on the same page, except that the link would be self-referencing — for example:
<h2 id="my-heading">
<a href="#my-heading">My Heading</a>
</h2>
The Markdown equivalent is:
## [My Heading](#my-heading)
This works because Kramdown, the default Markdown renderer for Jekyll, automatically adds id
attributes, using a slugified version of your heading text. So, for example:
### Some text
Conveniently gets rendered as:
<h3 id="some-text">Some text</h3>
Oftentimes I’ve seen a # symbol, or maybe a link symbol (🔗), either at the beginning or at the end of the heading.
Consider this example I found on the web:
<h3 id="example-1">
Example
<a aria-hidden="true" href="#example-1" hidden>#</a>
</h3>
I think hiding the symbol to assistive tech makes a lot of sense — it is more of a presentational thing, after all. However, decorative things like this don’t have to belong to the HTML — they can be defined in CSS, like:
h2::after,
h3::after {
content: ' #';
}
Even better, you can make it appear only on hover, or only when there is a link. Ultimately I settled on this for my blog:
h2:hover > a::after,
h3:hover > a::after {
content: ' #';
}
I’m unsure what the best practice is here; it just seems very convenient to be able to link to any section of a document. Ultimately I don’t think it matters too much. For what it’s worth, MDN makes the whole heading a link; again, the symbol at the end is purely presentational and should ideally be hidden from assistive technology.
It would be cool to be able to keep writing regular headings, i.e. to have ## My Heading automatically turn into a link. I think this could be done by configuring Kramdown, or maybe via a plugin. If you find a way to enable this, please let me know in the comments below!
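One possible approach (a sketch on my part, not something the original post confirms) is to post-process the rendered HTML, wrapping each heading that already carries Kramdown’s slugified id in a self-referencing link. The transformation itself is just a regex:

```ruby
# A sketch: wrap headings that already have slugified ids in self-links.
# In a Jekyll plugin this could be called from a
# Jekyll::Hooks.register(:documents, :post_render) block.
def linkify_headings(html)
  html.gsub(%r{<(h[2-4]) id="([^"]+)">(.*?)</\1>}m) do
    tag, id, text = Regexp.last_match.captures
    %(<#{tag} id="#{id}"><a href="##{id}">#{text}</a></#{tag}>)
  end
end

puts linkify_headings('<h3 id="some-text">Some text</h3>')
```

The regex is deliberately simplistic (it assumes the exact attribute order Kramdown emits), so treat it as a starting point rather than a robust parser.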
In an era where 95% of sites are bloated with cookie banners and behavioral tracking scripts, here’s a refreshing approach to web development.
Nowadays, loading a website on your device of choice can be painful — both for the browser, which has to parse and render all that stuff, and for you, the user, who has to wait for it (and pay for that data plan).
The issue is only exacerbated by the fact that mobile devices can be slow, the network can be slow, and there’s simply too much cruft to download.
There is a better way.
What if you could make your entire website fit into a single page?
A lot of sites are built as a single page (can we have more of those?) — and no, I don’t mean a SPA. I’m talking about something radically different here. Essentially, the challenge boils down to this:
Can I reduce every page down to a single HTTP request?
First of all, you may ask: why? Well, for many reasons. I did this to my blog, simonewebdesign.it. I mostly did it to prove it possible, but it turned out to be a fun challenge that kept me busy for months, and it resulted in a much faster, more maintainable website with a very small carbon footprint.
But the real question is, how?
This is essentially how I did it: I inlined everything that could possibly be inlined. Keep reading if you’re curious about the details.
This is where the journey started. I refactored some CSS, got rid of a few superfluous HTML tags, and the stylesheet got so small that I thought, why not just inline it? So I looked into that.
I thought I should be able to inline the compiled style.css’s contents into a <style> tag, somehow. This turned out to be a challenge. I couldn’t find a tool that did exactly this, so I wrote my own — or, to be more specific, I sent a pull request to inline-scripts, which seemed like the closest thing I could find. At first, all it was doing was inlining <script> tags, so I only had to do the same, but for CSS — a simpler job, figuratively speaking.
Anyway, with some clever Ruby scripting I managed to minify all the HTML, CSS and JS in one go. I would first inline the CSS “on the spot” (i.e. without creating a new file):
Dir["public/**/*.html"].each do |file|
puts "Processing #{file}..."
system "node_modules/.bin/inline-stylesheets #{file} #{file}"
end
I would then run html-minifier as such:
html-minifier --file-ext html --case-sensitive \
--collapse-boolean-attributes --collapse-whitespace \
--minify-css true --minify-js true \
--remove-attribute-quotes --remove-comments \
--remove-empty-attributes --remove-empty-elements \
--remove-optional-tags --remove-redundant-attributes \
--remove-script-type-attributes \
--remove-style-link-type-attributes \
--remove-tag-whitespace --sort-attributes \
--sort-class-name --trim-custom-fragments \
--use-short-doctype
And there you have it — a highly optimized, one-liner HTML file with inline CSS and JS — for every page.
The manifest.json is usually pretty small, so it makes sense to inline it.
How did I do this? It was actually pretty simple: I turned it into a data URI of type data:application/manifest+json, which is then loaded via a <link rel="manifest"> tag, just as usual. Here it is, in its full one-line glory:
<link href='data:application/manifest+json,{"name":"Simone Web Design","short_name":"SimoneDesign","theme_color":"%23555","background_color":"%23f6f6f6","display":"minimal-ui","description":"A tech blog"}' rel=manifest>
The only catch was that I had to run the JSON through encodeURIComponent first, so that characters like # wouldn’t break the URI.
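For illustration, here’s roughly how that encoding step might look in Node (the manifest object is abbreviated, and note that the tag above only escapes the # characters, whereas encodeURIComponent escapes more aggressively — both forms work):

```javascript
// Build a data-URI manifest <link> tag from a plain object.
// encodeURIComponent escapes characters like '#' that would
// otherwise break the URI.
const manifest = {
  name: "Simone Web Design",
  short_name: "SimoneDesign",
  theme_color: "#555",
};

const uri =
  "data:application/manifest+json," +
  encodeURIComponent(JSON.stringify(manifest));

console.log(`<link href='${uri}' rel=manifest>`);
```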
I got rid of client-side analytics. Cloudflare already gives me enough stats, such as the number of requests, unique visits and page views, grouped by visiting country. I don’t really need any more than that.
As an interesting side note, when I check the analytics on Cloudflare, it also shows a Carbon Impact Report, which claims to have saved me 836 grams of carbon (in 2020, versus average data centers) — this is equivalent to turning off one lightbulb for 20 hours, apparently. It sounds an awful lot like greenwashing, if you ask me, but to be fair they do seem to have put effort into reducing the Internet’s environmental impact.
I use Puma. It is a fast server that provides parallelism out of the box. This is what my Procfile looks like:
web: bundle exec puma --threads 8:32 --workers 3 -p $PORT
Basically what this means is that each of the 3 worker processes keeps between 8 and 32 threads alive, ready to serve requests.
This may be a little overkill, since I also use Cloudflare as a caching layer on top of this server, which is load balanced and globally distributed. I also don’t usually get much traffic, so this definitely achieves the goal of speed.
I recently switched to a new hosting provider, Fly.io. I had to, since Heroku, my old provider, unfortunately discontinued their free plan. Sad news: I had used it for 10 years, but it was time to move on.
Fly has a pretty sound infrastructure with modern features.
Last but not least, I was able to remove a CNAME record and use an A record instead. This means one less DNS roundtrip.
These are loosely related to the “one request” thing, but still worth mentioning.
Lots of formats have been superseded by more efficient ones: PNG by WebP, GIF by MP4, JPEG by AVIF… the list goes on — these are just the ones I’m aware of. I don’t think these should be inlined, but the new formats are definitely worth the effort, since they’re much more efficient.
I went from PNG to SVG for the favicon, which I blatantly stole from Peter Selinger, the guy behind Potrace (it was public domain, technically). My contribution was optimizing it even further, using Jake Archibald’s wonderful SVGOMG, powered by SVGO.
As for inlining it, I had to serialize it into a data URI using mini-svg-data-uri. I even ended up making a CLI out of it — it’s something I had to do anyway, and contributing back was the least I could have done.
I got rid of the custom Google font I was using and went for system fonts. This is what I have now:
html {
font-family: "PT Serif", Georgia, Times, "Times New Roman", serif;
}
I don’t actually provide PT Serif, however. If your machine happens to have that, great — if not, it’ll fall back to the next one. I might reconsider this choice in the future, but for now, this is good enough.
I waited until the end to say this, because you probably wouldn’t have believed me: this site doesn’t have any JavaScript, the only exception being the ServiceWorker registration:
<script>
navigator.serviceWorker.register("/sw.js")
</script>
The ServiceWorker is actually a separate JS file, because I couldn’t find a way to inline that (if you do know of a way, please let me know). But, other than that (and Disqus, which I’m planning to remove soon), I don’t need JS at all. ¯\_(ツ)_/¯
I hope you liked reading this article as much as I enjoyed writing it. I hope it tickled your curiosity and that, by shedding light on the importance of performance, I’ve inspired you to take action and improve your own site.
Before we start, a quick word of warning: it’s generally considered good practice to install your npm dependencies in the local node_modules folder whenever possible. This means simply running npm install without the --global (-g) flag. However, sometimes this is not an option, for instance if you have a library or tool that expects a binary to be already present in the system, like in my case.
Essentially, I had one problem: a Ruby gem that needed a Node.js package to be installed globally.
I had a Jekyll blog that used Pug (formerly Jade) for templating. Making Pug work locally was very easy using Jekyll-Pug, a Jekyll plugin that enables Pug templates. However, when deploying on Heroku, the build would fail because of the missing Pug library.
The Jekyll-Pug README is pretty clear:
Note: you must have pug installed. To install it, simply enter the terminal command npm install pug -g.
Two issues here:

1. Heroku’s Ruby buildpack doesn’t come with npm;
2. Pug needs to be installed globally.

Point #1 was pretty straightforward: I simply needed to add the Heroku Buildpack for Node.js alongside the Ruby one, essentially using two buildpacks instead of one. You can do this by running:
heroku buildpacks:add --index 1 heroku/nodejs
This will insert the Node.js buildpack before Ruby, so it will be executed first.
Point #2 was about installing Pug globally. The way I went to achieve this was by using package.json’s scripts. This is what my package.json looked like:
{
"scripts": {
"build": "npm install pug --global"
}
}
The npm install pug --global command would run on Heroku when pushing and, thanks to the multi-buildpack behaviour, all Node.js-related binaries would be available in subsequent buildpacks as well.
So in my specific case, this meant that Jekyll could find the global Pug binary and compile the blog successfully. Problem solved!
Whilst global dependencies are best avoided whenever possible, Heroku lets us run arbitrary commands and generate any build artifacts our apps need to function correctly. Buildpacks are Heroku’s way of handling dependencies and compiling code. There is a list of official buildpacks for us to use, for free — and if you ever need to install a global dependency (or run any arbitrary command in Node.js, for that matter), you can do so using scripts in your package.json.
It’s actually pretty simple: I’ll show you how.

I like the idea of having two folders, each containing many git repositories: I’ll call them Work and Projects, but you’re naturally free to name them however you prefer.

The first step is to create a file in your home directory, named .gitconfig. You probably have it already, and that’s fine. Just open it and paste this:
[includeIf "gitdir:~/Work/"]
path = ~/Work/.gitconfig
[includeIf "gitdir:~/Projects/"]
path = ~/Projects/.gitconfig
It’s pretty self-explanatory, right? We’re essentially saying:

- for any repository inside ~/Work/, include the config located at ~/Work/.gitconfig;
- for any repository inside ~/Projects/, include the config located at ~/Projects/.gitconfig.

Note you don’t even need to create these files — just use git config to write to them. For example, to use your work email on all work-related repos, you might do:
git config --file ~/Work/.gitconfig user.email john@example.work
This is great, because we can now have completely separate configurations, each living in their own separate folder, and the right configuration will be applied depending on the location. Awesome!
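To see the conditional include working end-to-end, here’s a self-contained demo you can run safely (the email address and paths are made up, and a throwaway HOME keeps your real config untouched):

```shell
# Use a throwaway HOME so we don't touch the real config.
export HOME="$(mktemp -d)"
mkdir -p "$HOME/Work/repo"

# Global config: conditionally include the work config.
printf '[includeIf "gitdir:~/Work/"]\n\tpath = ~/Work/.gitconfig\n' > "$HOME/.gitconfig"

# Write the work identity (this creates ~/Work/.gitconfig for us).
git config --file "$HOME/Work/.gitconfig" user.email john@example.work

# Inside a repo under ~/Work, the include applies:
cd "$HOME/Work/repo"
git init --quiet
git config user.email
```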
There are a few little caveats to be aware of, just in case you run into issues. If you do, you may want to read the Includes section in the official docs. For example, you know the trailing slash in gitdir:~/Work/? You’d think it wouldn’t matter, but it does: if the path ends with /, it matches Work and everything inside, recursively. Also, don’t add a space between gitdir: and the path, or it won’t work.
You’ll likely want to avoid repeating yourself and share the common bits of configuration, such as git aliases, if you have any.
If that’s the case, just keep those in the global config by using the --global flag, for example:
git config --global alias.st status
I hope you found this useful. If you run into trouble, feel free to leave me a comment below and I’ll try to help. Remember, git is your friend.
Rust is a statically typed language and, due to the memory safety guarantees we are given, all values of a given type must have a known, fixed size at compile time; therefore we are not allowed to create a collection of multiple types. However, dynamically sized types also exist, and in this article I’ll show how to use them.

Say we have a HashMap and we want to add more than one value type to it.
For example:
use std::collections::HashMap;
fn main() {
let mut map = HashMap::new();
map.insert("a", "1");
map.insert("b", "2");
for (key, value) in &map {
println!("{}: {}", key, value);
}
}
This prints:
a: 1
b: 2
In the example above, the type of map is HashMap<&str, &str>. In other words, both keys and values are of type &str.

What if we want the values to be of type &str and, say, i32?
This won’t work:
use std::collections::HashMap;
fn main() {
let mut map = HashMap::new();
map.insert("a", "1");
map.insert("b", 2);
for (key, value) in &map {
println!("{}: {}", key, value);
}
}
If we try it, we get this compile time error:
error[E0308]: mismatched types
--> src/main.rs
map.insert("b", 2);
^ expected `&str`, found integer
So how do we insert multiple value types in a HashMap? We have several options, each with its own trade-offs.

Using an enum

We can define our own enum to model our value type, and insert that into the hashmap:
use std::collections::HashMap;
#[derive(Debug)]
enum Value {
Str(&'static str),
Int(i32),
}
fn main() {
let mut map = HashMap::new();
map.insert("a", Value::Str("1"));
map.insert("b", Value::Int(2));
for (key, value) in &map {
println!("{}: {:?}", key, value);
}
}
This prints:
a: Str("1")
b: Int(2)
This is similar to a union type. By inserting values of type Value::*, we are effectively saying that the map can accept types that are either string, integer, or any other composite type we wish to add.
Using Box

We can wrap our types in the Box struct:
use std::collections::HashMap;
fn main() {
let mut map = HashMap::new();
map.insert("a", Box::new("1"));
map.insert("b", Box::new(2));
for (key, value) in &map {
println!("{}: {}", key, value);
}
}
This doesn’t compile right away. If we try to run this, we get a “mismatched types” error:
error[E0308]: mismatched types
--> src/main.rs
map.insert("b", Box::new(2));
^ expected `&str`, found integer
Luckily we can fix this by explicitly declaring the type of our map:
let mut map: HashMap<&str, Box<dyn Display + 'static>> = HashMap::new();
This works because we are actually storing instances of Box, not primitive types; dyn Display means the type of the trait object Display. In this case, Display happens to be a common trait between &str and i32. Here is the complete example:
use std::collections::HashMap;
use std::fmt::Display;
fn main() {
let mut map: HashMap<&str, Box<dyn Display + 'static>> = HashMap::new();
map.insert("a", Box::new("1".to_string()));
map.insert("b", Box::new(2));
for (key, value) in &map {
println!("{}: {}", key, value);
}
}
You may wonder what would happen if we were to use the type dyn Display without the Box wrapper. If we try that, we’d get this nasty error:
error[E0277]: the size for values of type `(dyn std::fmt::Display + 'static)` cannot be known at compilation time
--> src/main.rs
let mut map: HashMap<&str, (dyn Display + 'static)> = HashMap::new();
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ doesn't have a size known at compile-time
= help: the trait `std::marker::Sized` is not implemented for `(dyn std::fmt::Display + 'static)`
= note: to learn more, visit <https://doc.rust-lang.org/book/ch19-04-advanced-types.html#dynamically-sized-types-and-the-sized-trait>
= note: required by `std::collections::HashMap
This error may be confusing at first, but it actually makes sense. The Rust Programming Language book explains this very well in the Advanced Types chapter:
“Rust needs to know how much memory to allocate for any value of a particular type, and all values of a type must use the same amount of memory.”
The Box<T> type is a pointer type. It lets us allocate data on the heap rather than the stack, keeping only a fixed-size pointer to that data on the stack.

Using two maps

Here we’re not actually using a HashMap with separate value types, but rather two maps, each with its own type. It’s a bit more verbose and perhaps not the solution you’re looking for, but it’s worth keeping in mind that this works too:
use std::collections::HashMap;
fn main() {
let mut strings_map = HashMap::new();
let mut integers_map = HashMap::new();
strings_map.insert("a", "1");
integers_map.insert("b", 2);
for (key, value) in &strings_map {
println!("{}: {}", key, value);
}
for (key, value) in &integers_map {
println!("{}: {}", key, value);
}
}
It feels much simpler! And the output is naturally the same:
a: 1
b: 2
Rust is very strict when it comes to polymorphic types. As you’ve seen, there are ways to achieve them, but they don’t feel as straightforward as in dynamic languages such as Ruby or Python. Sometimes, though, it’s useful to take a step back and look at the actual problem we’re trying to solve. Once I did that, I realized I didn’t necessarily have to limit myself to a single data structure, so I went for the last option.
I’m still a beginner with Rust, so I might have missed a better solution. Trait objects could be one: I’ve experimented with them, but they weren’t quite what I was looking for. If you have any suggestions or know of other possible solutions, feel free to comment below!
Update: @alilleybrinker on Twitter pointed out two caveats to be aware of. One is about the meaning of the 'static bound: when used on a generic type, any references inside the type must live as long as 'static. However, by adding 'static we are also effectively saying that the values inside the Box won’t contain references. The other caveat is that, when using dyn Display, the original types are erased, so the only methods available are those of the Display trait.
You have tried git pull, but you’re getting this error:
error: Untracked working tree file * would be overwritten by merge.
fatal: read-tree failed
You need the changes, but obviously you don’t want to overwrite or lose any files. Don’t worry, the fix is actually straightforward!
The reason is probably that you didn’t clone the repository. In my case, I already had some local files, so instead of running git clone, here’s what I did:
git init
git remote add origin git@github.com:<username>/<reponame>.git
If you then try to git pull origin <branch-name>, you might get the “untracked working tree” error.
If you have already tried pulling from the remote and it didn’t work, here’s the fix:
git branch --track <branch-name> origin/<branch-name>
For example, if your branch is named main:
git branch --track main origin/main
What this does is simply tell Git that these two branches, main and origin/main, are related to each other, and that it should keep track of the changes between them. It turns out this also fixes the error, since Git can now see that nothing would be overwritten.
After running the command above, git status will indeed reveal the differences between the two repositories: your untracked files (i.e. extra files that you only have on your PC) will still be there, and some other files may have been automatically staged for deletion: these are files that are present in the remote repo but that you don’t have locally.
At this point you’ll want to double-check that everything is the way it should be. You may also want to run:
git reset
This gets you to a clean state. Don’t worry, it won’t delete anything at all; it will simply unstage any modification that was applied automatically by Git. You can stage back the changes you care about using git add . — once you are happy, you can finally make a commit and run:
git push
Note there’s no need to specify the origin and the branch name anymore, since the two branches (the local and the remote) are now tracked.
Hopefully this article helped you fix your issue; either way, feel free to ask for help by leaving a comment below.
Happy hacking!
You can toggle macOS dark mode from the terminal with this one-liner:

osascript -e 'tell app "System Events" to tell appearance preferences to set dark mode to not dark mode'
Try it and it will switch the mode immediately. No need to restart or install anything.
It’s AppleScript. dark mode is a boolean value in the user defaults system, and not dark mode is the opposite of that value. So, for example, if the value is true, it’s like saying not true (i.e. false), effectively acting like a light switch.
Enjoy the dark!
Especially with the advent of React, the tendency is to write a custom menu component that uses JavaScript to open/close itself, perhaps by using an invisible overlay to detect clicks outside the menu and close it accordingly. This works fine in practice, but it doesn't have to be so complicated. If all you need is a simple dropdown menu that opens on click and closes when you click elsewhere, then look no further. It's much simpler than you think!
The menu itself is done in pure HTML and CSS; JavaScript is only needed for whatever the menu items actually do. That's the trick: we hide the menu in CSS initially, then show it while the button is focused, and keep it visible while we're clicking on the menu itself — this is necessary so that the click actually gets registered. That's it! No JS trickery involved.
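The embedded demo isn't reproduced in this version of the post, so here is a minimal sketch of the idea (the class names are mine, not the author's):

```html
<div class="dropdown">
  <button class="trigger">Menu</button>
  <ul class="menu">
    <li><button>Item 1</button></li>
    <li><button>Item 2</button></li>
  </ul>
</div>

<style>
  .menu { display: none; }

  /* Show the menu while the trigger is focused, and keep it visible
     while a click on the menu itself is in progress (:active), so the
     item's click handler runs before the blur hides the menu. */
  .trigger:focus + .menu,
  .menu:active {
    display: block;
  }
</style>
```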
You can attach event listeners to the menu items, e.g. using onclick or document.addEventListener, and they'll work as usual. You may also just use <a> tags instead of buttons, depending on your use case.
Naturally the menu can be opened only by elements that can receive focus, such as buttons and anchors. So what about other non-interactive elements? Can we make them focusable too? The answer is yes!
We want to display a context menu when clicking on the following image:
The trick here was to add tabindex. This makes the element focusable, so that it can open the menu on click. Note that if the clickable element is a <button> or other interactive content (i.e. any focusable element), then you don't even need this!

I've used a <figure>, but you can use any element you like. Just add tabindex="-1" to make it focusable, if it isn't already. You can place the menu anywhere you want in the HTML, as long as you're able to target it with a CSS selector. Just try not to put a button inside a button, as that's invalid HTML (although technically it will still work).
You'll need JavaScript for this, but it's entirely up to you whether you want to do it. Alternatively you could add position: absolute to the menu and just make it appear below (or next to) the element you clicked — no need for JS in that case! Anyway, this did the trick for me:
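The original embedded snippet is missing from this version of the post; it was presumably something along these lines (a guess on my part, with hypothetical selectors): position the menu at the click coordinates.

```javascript
// Hypothetical selectors — adjust to your own markup.
const figure = document.querySelector("figure");
const menu = document.querySelector(".menu");

figure.addEventListener("click", (event) => {
  // Place the menu at the pointer position.
  menu.style.left = `${event.pageX}px`;
  menu.style.top = `${event.pageY}px`;
});
```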
If you need the menu to toggle (open on one click, close on the next), you'll probably be better off using the old checkbox hack.
Accessibility isn't the main focus of this article, but it's an important topic nonetheless. Menu items should be navigable with a keyboard: this requires JS, but it's not hard to achieve. The W3C has done a lot of work around accessibility and there are plenty of examples you can refer to on their site: for instance, I think the menu button example is particularly relevant.
It may not work in some very old browsers, so make sure to test it in the browsers you need to support. This MDN page has some info about what happens to the focus of a button when being clicked/tapped on different platforms. I did some tests myself and it seems to work well everywhere, including IE and mobile browsers.
Update: this blog post received a lot of attention, and a few folks reached out to me about an issue specific to Safari and Firefox, on both iOS and macOS: the button won't focus. No worries though: it only affects buttons; other tags will work just fine. You may consider using <span tabindex=0>, though the semantic meaning is lost entirely there. If you really want to use a button, you can always focus it programmatically via JS (only needed on Apple devices) — for example:
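The example itself is missing from this version of the post; a minimal sketch (hypothetical selector) of focusing the button manually would be:

```javascript
// Safari and Firefox on Apple platforms don't focus a <button> on
// click, so do it ourselves. Hypothetical selector — adjust as needed.
const button = document.querySelector(".menu-button");
button.addEventListener("click", () => button.focus());
```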
Another issue you may find, specific to Apple, is that the menu won't close when tapping outside of it. There's an easy fix: simply add tabindex="-1" to the container or the body tag.
And that's it! I hope you found this useful. If you spot any issues, please do let me know!
I didn’t know anything about OOP, Design Patterns, Single Responsibility… all I knew was some PHP, Visual Basic, and database design stuff. That was it.
So I went to a book store and I bought this book about Object-Oriented Programming in Java 6. It was a massive book, probably around 1000 pages of code and programming best practices, and I read like 80% of it. Some parts were too advanced for me, but I learned a lot.
I used to like Java. I thought, “so this is what real programming looks like, with classes and inheritance. That’s the right way”.
I actually believed this for a while, until that day…
One day I went to this website, projecteuler.net, which is basically a way to prove your skills by solving difficult programming challenges, and learn in the process.
It was years ago, but I remember solving the first couple of exercises pretty easily. The fourth one was a bit harder. Here’s the original text:
A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
Source: https://projecteuler.net/problem=4
I spent a few hours on it before coming up with this:
import static java.lang.System.out;
import java.util.ArrayList;
import java.util.List;
public class Euler4
{
static final int MIN = 100;
static final int MAX = 999;
public static List<Integer> getPalindromes(int min, int max)
{
List<Integer> palindromes = new ArrayList<>();
for (int i=max; i >= min; i--)
{
for (int j=max; j >= min; j--)
{
int product = i * j;
if (isPalindrome(new Integer(product).toString()))
palindromes.add(product);
}
}
return palindromes;
}
public static boolean isPalindrome(String str)
{
if (str.length() < 0) return false;
if (str.length() <= 1) return true;
char firstChar = str.charAt(0);
char lastChar = str.charAt(str.length()-1);
if (firstChar == lastChar) {
return isPalindrome(str.substring(1, str.length()-1));
}
return false;
}
public static int getHighestNumber(List<Integer> numbers)
{
int highestNumber = -1;
for (int number : numbers)
if (number > highestNumber)
highestNumber = number;
return highestNumber;
}
public static void main(String[] args)
{
List<Integer> palindromes = getPalindromes(MIN, MAX);
out.println(getHighestNumber(palindromes));
}
}
It’s 46 lines of code, without counting blank lines. Not too bad, right?
Ok, don’t be mean. I know that’s probably shitty code, but it was my own solution and I was quite proud of it.
Now, when you finish a challenge successfully, you’re given access to the forum, where other programmers post their own solutions in many different languages.
That’s where I first discovered Ruby.
I was reading the thread about the problem I just solved, when I stumbled across this Ruby solution:
m = 0
901.upto(999) {|a|
901.upto(999){|b|
s = (a*b).to_s
m = [m, a*b].max if s == s.reverse
}
}
puts m
And I was like, “wow, seriously? Only 8 lines of code?”.
I couldn’t believe my eyes. I was staring at something marvelous; a kind of beauty I had never come across before.
Ruby is an object-oriented programming language that focuses on expressiveness and readability.
It was love at first sight. I started reading about this amazing language: about the fact that everything in Ruby is an object, even integers, and that you can write code like 3.times { print "Hello" } to simply print “Hello” three times. It was like reading English, and I felt truly amazed, humbled, and inspired.
Anyway, that’s just part of my story about becoming a better programmer. I’m not sure what the point is, I just felt like writing it down. But if, like me, you’re one of those people that need some ‘takeaway’ from a story, I guess it should be this:
Just don't stop learning, ever.
Keep on learning and practicing, and you too will discover beautiful things.
Elixir’s Logger module lets you log messages at one of several levels: debug, info, warn, or error. For example:
Logger.info("something happened")
Turns into:
12:34:56.789 [info] something happened
Very nice. However, there are cases where you may want to, say, change some data structure, like update a map or a list, and then log the transition, without breaking the pipe. Example:
def my_function do
list = [1, 2, 3]
list
|> Logger.debug("before insert: #{inspect list}")
|> Enum.into([0])
|> Logger.debug("after insert: #{inspect list}")
end
This doesn’t work for many reasons. First, we can’t refer to list that way: if we do, we will always be logging [1, 2, 3], because Elixir’s data structures are immutable. Second, Logger.* functions return the :ok atom, which means you can’t use them in the middle of a pipe (unless :ok is what you want to return).
The solution to both issues is actually pretty straightforward: use a lambda! A lambda is just an anonymous function. We can define it and call it right away. So the code above becomes:
def my_function do
[1, 2, 3]
|> (fn list ->
Logger.debug("before insert: #{inspect list}")
list
end).()
|> Enum.into([0])
|> (fn list ->
Logger.debug("after insert: #{inspect list}")
list
end).()
end
If we call this function, we get:
12:34:56.789 [debug] before insert: [1, 2, 3]
12:34:56.823 [debug] after insert: [0, 1, 2, 3]
Great, exactly what we want! Except the syntax is horrible. But fear not, we can improve on it. How about we make a wrapper?
defmodule PipeableLogger do
require Logger
def debug(data, msg) do
Logger.debug(msg)
data
end
# def warn, do: ...
# def error, do: ...
# def info, do: ...
end
Let’s rewrite our function once again:
def my_function do
[1, 2, 3]
|> (&PipeableLogger.debug(&1, "before insert: #{inspect &1}")).()
|> Enum.into([0])
|> (&PipeableLogger.debug(&1, "after insert: #{inspect &1}")).()
end
Still not pretty though, as we still needed to wrap the function in a lambda. If we want to build a proper Logger wrapper, there are at least two different cases we may want to handle: logging just the data, and logging a message while passing the data through. Here’s the improved version of PipeableLogger:
:
defmodule PipeableLogger do
require Logger
def debug(data, msg \\ "", metadata \\ [])
def debug(data, msg, metadata) when msg == "", do: Logger.debug(data, metadata)
def debug(data, msg, metadata) do
Logger.debug(msg, metadata)
data
end
# def warn, do: ...
# def error, do: ...
# def info, do: ...
end
Let’s use it:
def my_function do
[1, 2, 3]
|> PipeableLogger.debug("before insert")
|> Enum.into([0])
|> PipeableLogger.debug("after insert")
end
Much, much simpler! The only problem now is that we’re logging just a message. What if we also want to log the data? It’s a lambda all over again.
Here’s the final version I came up with:
defmodule PipeableLogger do
require Logger
def debug(data, msg \\ "", metadata \\ [])
def debug(data, msg, metadata) when msg == "", do: Logger.debug(data, metadata)
def debug(data, msg, metadata) when is_binary(data) do
Logger.debug(msg <> data, metadata)
data
end
def debug(data, msg, metadata) do
Logger.debug(msg <> inspect(data), metadata)
data
end
# def warn, do: ...
# def error, do: ...
# def info, do: ...
end
The assumption is that we always want to concatenate the data with the message, which is fair enough I think. Let’s see it in action:
def my_function do
[1, 2, 3]
|> PipeableLogger.debug("before insert: ")
|> Enum.into([0])
|> PipeableLogger.debug("after insert: ")
end
iex> my_function()
12:34:56.789 [debug] before insert: [1, 2, 3]
12:34:56.789 [debug] after insert: [0, 1, 2, 3]
[0, 1, 2, 3]
Now we can log the data with a message, all in a pipe and without a lambda! Nice!
Summing up, I'm not convinced a `Logger` wrapper is the right way. This kinda goes against the blog post, but to be fair I think Elixir people tend to use pipes way too much (I'm guilty as well). So I probably wouldn't wrap `Logger` in any project.
It's also worth noting that `Logger` supports the concept of metadata, which basically means you can already attach any data you want. For example, if you put this in your `config.exs`:
config :logger, :console,
metadata: [:my_list]
You can then call `Logger` like this:
iex(1)> require Logger
Logger
iex(2)> Logger.info "Work done", my_list: inspect [1, 2, 3]
12:34:56.789 my_list=[1, 2, 3] [info] Work done
:ok
Point is, you don't need a wrapper if all you want is to concatenate some data into the log message. You do need a wrapper (or a lambda) though, if you want to use `Logger` in a pipe.
So how about this instead?
def my_function do
list = [1, 2, 3]
Logger.debug("before insert: #{inspect list}")
new_list = Enum.into(list, [0])
Logger.debug("after insert: #{inspect new_list}")
new_list
end
Simple is better. It’s fine to break that pipe every once in a while!
]]>Given an array of arbitrarily nested objects, return a flat array with all the objects marked as “good”.
The definition above is quite generic, so I’ll provide examples to show exactly what I mean.
The array in JavaScript looks like this:
var items = [{
id: 1,
good: true
}, {
id: 2,
children: [{
id: 3,
good: true
}, {
id: 4,
good: true
}, {
id: 5,
children: [{
id: 6,
good: true
}
...
]
}, {
id: 9,
children: [...]
}, ...]
}, ...]
We want the IDs of the good ones.
You might have noticed not all objects are “good”. Number 2 for example is not good. So the result in this case should be:
[1, 3, 4, 6]
The only thing to notice here is that you know it's not good because it's not marked as such. In other words, when some object is "bad", there's no `good: false` nor `bad: true` that tells you that.
So how do we solve this challenge?
Since there’s an arbitrary nesting depth, we can once again leverage the power and simplicity of recursion.
I’ve created the function goodOnes(items)
that takes the input and
returns what we expect. I’m also using Ramda.js, just because I wanted a clean functional solution and I didn’t want to mess around
object mutation.
Here it is:
function goodOnes(items) {
return R.reduce(theGoodOne, [], items);
function theGoodOne(acc, item) {
if (item.good) {
return acc.concat(item.id);
} else if (item.children && item.children.length > 0) {
return R.reduce(theGoodOne, acc, item.children);
}
return acc;
}
}
As a side note, you don't really have to use Ramda.js. `Array.prototype.reduce` does the same, although in a less elegant way.
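For reference, here's what a plain `Array.prototype.reduce` version might look like (my own sketch, not the original post's code):

```javascript
// Collect the IDs of "good" items from an arbitrarily nested structure,
// using only Array.prototype.reduce.
function goodOnesVanilla(items) {
  return items.reduce(function theGoodOne(acc, item) {
    if (item.good) {
      return acc.concat(item.id);
    } else if (item.children && item.children.length > 0) {
      return item.children.reduce(theGoodOne, acc);
    }
    return acc;
  }, []);
}

var nested = [
  { id: 1, good: true },
  { id: 2, children: [
    { id: 3, good: true },
    { id: 4, good: true },
    { id: 5, children: [{ id: 6, good: true }] }
  ]}
];

console.log(goodOnesVanilla(nested)); // [ 1, 3, 4, 6 ]
```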
What this function does is basically just collecting values. The starting point is an empty array, which you can see as the second argument in the first line. `theGoodOne` is another function (a closure, to be specific) that implicitly takes two arguments: `acc` (the accumulator, the empty array) and `item` (the current item in the loop).
If the item is good, we return a new array with the item's ID appended. Otherwise, we return the accumulator. However, if the item happens to have some children, we start over doing the same thing (i.e. looping over its children), this time keeping track of the accumulator we already have. It might still be empty, but we don't care yet. We just return it at the very end.
Now, you might have noticed a bug: what happens if an item is good, but also has children? … Yes, its children will be discarded! I did it on purpose, by the way. When I made this function, the original array of items never had a good item with children: there were only good items, or items with children. The algorithm reflects this, so it's technically not a bug.
If you’re curious about what’s the original intent behind this function, it is to collect values from an infinitely nestable architecture of UI components. There are text components, number components, datepickers etc… those are all part of a category called fields. There are also wrappers, that could be for example a fieldset or a grid. Wrappers can contain fields, but also other wrappers.
So what if you have such a data structure with many components, and all you need is an array of fields? Simple: just reduce recursively over it! ;)
More generally, you can use a recursive reduce whenever you have a nested data structure (such as an array of arrays) and you want to get something out of it.
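As a quick illustration of that idea (a sketch of mine, not from the original post), here's a deep flatten built with the same recursive-reduce pattern:

```javascript
// Deep-flatten an arbitrarily nested array with a recursive reduce.
function deepFlatten(list) {
  return list.reduce(function step(acc, item) {
    return Array.isArray(item) ? item.reduce(step, acc) : acc.concat(item);
  }, []);
}

console.log(deepFlatten([1, [2, [3, 4]], [[5]]])); // [ 1, 2, 3, 4, 5 ]
```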
This recursive solution follows the same logic as the JavaScript one, but somehow it feels superior. It could probably be rewritten in a more elegant way, but I'm not very experienced with Clojure, so here we go:
(defn good-one [acc item]
(if (item :good)
(conj acc (item :id))
(if (seq (item :children))
(reduce good-one acc (item :children))
acc)))
(defn good-ones [collection]
(reduce good-one [] collection))
Everything is on GitHub if you want to fiddle around – just follow the instructions to get the demos up and running on your computer.
Create a `blend` function that, given two lists of the same length, returns a new list with each element alternated. E.g.: `blend [1, 2, 3] [4, 5, 6] => [1, 4, 2, 5, 3, 6]`
As with all challenges, it can be solved in many different ways. However this particular one is easily solvable with functional programming techniques such as recursion.
You can try implementing it on your own first or just look straight at the solutions below.
The one below is probably the most straightforward solution:
blend : List a -> List a -> List a
blend xs ys =
case xs of
x :: xs' -> x :: blend ys xs'
_ -> []
Notice how I exchanged the arguments in the recursive call. That did the trick!
Let's try it in the REPL. I added backslashes so you can copy-paste the function:
$ elm-repl
> blend xs ys = \
case xs of \
x :: xs' -> x :: blend ys xs' \
_ -> []
<function> : List a -> List a -> List a
> blend [0,0,0] [1,1,1]
[0,1,0,1,0,1] : List number
We can achieve the same in Swift by using an extension that splits up an Array into head and tail (credits to Chris Eidhof):
extension Array {
var match : (head: T, tail: [T])? {
return (count > 0) ? (self[0],Array(self[1..<count])) : nil
}
}
And here’s the solution:
func blend(firstArray: Array<AnyObject>, secondArray: Array<AnyObject>) -> Array<AnyObject> {
if let (head, tail) = firstArray.match {
return [head] + blend(secondArray, secondArray: tail)
} else {
return []
}
}
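For comparison, the same swap-the-arguments trick ports to JavaScript in a few lines (my own sketch, not part of the original post):

```javascript
// Interleave two same-length arrays by recursing with the arguments swapped,
// mirroring the Elm solution above.
function blend(xs, ys) {
  return xs.length === 0 ? [] : [xs[0]].concat(blend(ys, xs.slice(1)));
}

console.log(blend([1, 2, 3], [4, 5, 6])); // [ 1, 4, 2, 5, 3, 6 ]
```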
If you know of a better way, please let me know! Also feel free to leave a comment with any other alternative solution, even in other languages.
]]>Let’s say we want to get the AST of this file:
# lib/hello.ex
defmodule Hello do
def hi(name) do
IO.puts "Hello " <> name
end
end
We can do it right away from `iex`:
$ iex
Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:8:8] [async-threads:10] [kernel-poll:false]
Interactive Elixir (1.0.5) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> {:ok, ast} = Code.string_to_quoted(File.read!("lib/hello.ex"))
{:ok,
{:defmodule, [line: 1],
[{:__aliases__, [counter: 0, line: 1], [:Hello]},
[do: {:def, [line: 3],
[{:hi, [line: 3], [{:name, [line: 3], nil}]},
[do: {{:., [line: 4],
[{:__aliases__, [counter: 0, line: 4], [:IO]}, :puts]}, [line: 4],
[{:<>, [line: 4], ["Hello ", {:name, [line: 4], nil}]}]}]]}]]}}
In our case, the `ast` variable will contain the full AST of the source code.
In case you want to get the AST of a single line, it’s even simpler:
iex(1)> name = "John"
"John"
iex(2)> IO.puts "Hello " <> name
Hello John
:ok
iex(3)> ast = quote do: IO.puts "Hello " <> name
{{:., [], [{:__aliases__, [alias: false], [:IO]}, :puts]}, [],
[{:<>, [context: Elixir, import: Kernel], ["Hello ", {:name, [], Elixir}]}]}
For more context, I recommend reading the introduction to meta-programming in Elixir on Elixir’s official site.
In case you’re interested in parsing Elixir, Tokenizing and parsing in Elixir with yecc and leex by Andrea Leopardi is a very recommended reading.
Have fun with Elixir!
]]>Environment variables are very useful for configuring your app depending on the environment, without having to hardcode any value in the source.
At my current company we are building a microservice infrastructure, where the frontend and the backend are completely decoupled applications. We also use Docker to manage these microservices and link them together. Turns out that storing the configuration in the environment—as opposed to storing it in the database or in the code itself—is quite valuable, as described also in the twelve-factor methodology.
A web page doesn’t have access to OS variables, so you can’t normally use them.
The solution is pretty simple: you just need to generate a file that contains them.
For such a trivial task you could be tempted to use your language of choice, e.g. in JavaScript (Node.js) you have access to `process.env.SOME_VAR`. In Python you would probably do `os.getenv('SOME_VAR')` and in Ruby you'd use `ENV['SOME_VAR']` — but what about some old-school shell scripting? The script could be as simple as:
#!/bin/sh
# bin/env.sh
echo "env = {"
echo " USER: '$USER',"
echo " HOSTNAME: '$HOSTNAME'"
echo "}"
That, when executed, will become:
// env.js
env = {
USER: 'yourname',
HOSTNAME: 'ubuntu'
}
And the script to execute on the shell is:
./bin/env.sh > env.js
Pretty straightforward, isn't it? You can then include the generated `env.js` in your page and read the variables from any script:
<!DOCTYPE html>
<html>
<head>
...
</head>
<body>
<script src="env.js"></script>
<script>
console.log(env.USER, env.HOSTNAME);
</script>
</body>
</html>
One downside to this approach is that you have to “make a build” every time you change the variables. If you know any workarounds or better solutions, please let me know!
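One possible workaround, sketched here as my own assumption rather than the post's solution, is to regenerate the file every time the container starts (e.g. in a Docker entrypoint), so `env.js` always reflects the current environment:

```shell
#!/bin/sh
# Hypothetical entrypoint sketch: rebuild env.js on every start so it
# always reflects the container's current environment variables.
{
  echo "env = {"
  echo "  USER: '$USER',"
  echo "  HOSTNAME: '$HOSTNAME'"
  echo "}"
} > env.js
# ...then hand off to the real main process, e.g.: exec "$@"
```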
Find the source code on GitHub. Download the zip file here.
Have fun!
]]>“Pong is one of the earliest arcade video games; it is a tennis sports game featuring simple two-dimensional graphics.” - Wikipedia
Have you ever dreamed of building a game in JavaScript? I did, and I also managed to make my first one. Of course I also wrote some tips and gotchas to help you complete this nice challenge.
Pong, at it’s core, is an extremely simple game. That’s why it’s a good one to begin with if you have just started learning game design basics. Of course you could start with many other games, but if you are looking for something relatively simple to build, Pong really is one of the simplest games ever made.
AFAIK, there are at least two ways of doing it: I personally call them the “simple way” and the “hard way”. I did both, but first let’s explore the simple one.
I aimed to make it as simple as possible, so I just created one HTML file that references a few JavaScript files. You may ask: why not a single file? Because it's usually preferable to have many little files rather than one massive plate of spaghetti code. So, here's the project's structure, served:
index.html
canvas.js
game.js
keyboard.js
main.js
render.js
reset.js
update.js
index.html is our single entry point to the game.
canvas.js contains the code for initializing the `canvas` DOM object and the 2D context.
game.js contains the game objects. This file will be executed only once at the beginning, when the game loads.
keyboard.js has the keyboard bindings.
main.js is perhaps the most important file, because it contains the main game loop.
render.js does… the rendering. (you don’t say?)
reset.js is for resetting the game to the initial state, called every time a player wins.
update.js contains 90% of the game logic, and obviously is for updating the game state (before rendering).
The main loop is at the core of our game. Maybe it’s hard to believe, but virtually every single videogame in the world lives and dies within a loop.
Implementing a game loop is a lot simpler than you think, but it’s not the focus of this tutorial. The resource I highly recommend for getting started is How to make a simple HTML5 Canvas game, by Matt Hackett. All my work is actually based on his tutorial. Read it, and you’ll get a basic understanding of the fundamentals of game development.
We want to focus on the game logic now, so for the time being let’s pretend our game loop looks like this:
while (true) {
update(); // update game objects
render(); // render game objects
}
Got it? :-)
How do we make the ball move across the screen? In JavaScript, we can define objects with properties. The essential properties of our `ball` object are `position` and `speed`. The position represents the coordinates where the object is in the canvas space. Example:
var ball = {
x: 0,
y: 0,
speedX: 0,
speedY: 0
}
In order to make it move, we should change its position, and we can do it through the speed. This is the heart of our game:
if (isGameStarted) {
// Ball movement
ball.x += ball.speedX * modifier;
ball.y += ball.speedY * modifier;
}
As you can imagine, `isGameStarted` is just a boolean flag. But what's `modifier`? Well, it's the delta time of our game loop. Put simply, the delta time is the time elapsed between one frame and the next. This is very useful because we can use it to calculate how far the ball should move on each frame. Without it, the ball's speed would depend on the frame rate.
The game logic is mainly about the ball: it should be able to bounce away from the paddles. How can you implement that? It’s pretty simple - have a look at the code below.
// Ball is out of the left boundary
if (ball.x <= 0) {
// Player 2 wins!
p2.score++;
reset(); // reset the game to the initial state
}
// Ball is out of the right boundary
if (ball.x >= canvas.width - ball.size) {
// Player 1 wins!
p1.score++;
reset();
}
// Ball is colliding with the top
if (ball.y <= 0) {
ball.speedY = Math.abs(ball.speedY);
}
// Ball is colliding with the bottom
if (ball.y + ball.size >= canvas.height) {
ball.speedY = Math.abs(ball.speedY) * -1; // inverted
}
Can you see what’s going on in the code? Basically, if the ball goes beyond the canvas’ left or right boundaries, all we do is increment the score and reset the game. If the ball touches the top or the bottom instead, we invert its speed on the Y axis. If you think about it, it’s all you need to make something reflect over a surface. So, in other words, if the speed is negative we make it positive, and viceversa.
What should happen when the ball touches one of the paddles? Fundamentally the same thing explained above: it should bounce away, reflecting on the paddle’s surface (and to do this we invert the Y speed). But how do we actually check if they are colliding?
The most common kind of collision detection is called AABB - Axis-Aligned Bounding Boxes. You can find plenty of resources around the Web explaining how this technique works, so I won’t talk about it (have a quick search for “AABB collision detection”, or just keep reading). As Linus Torvalds once said,
“Talk is cheap. Show me the code.”
Here we go:
if (
ball.x <= (p1.x + p1.width)
&& p1.x <= (ball.x + ball.size)
&& ball.y <= (p1.y + p1.height)
&& p1.y <= (ball.y + ball.size)
) {
// Ball is colliding with the left paddle
// Ensure the speed on the X axis is positive
ball.speedX = Math.abs(ball.speedX);
// Give the ball a bit of randomness by
// increasing/decreasing its speed on the Y axis
ball.speedY = randomize();
}
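Extracted as a reusable predicate (my own refactor sketch, with hypothetical names), the AABB overlap test reads:

```javascript
// Axis-Aligned Bounding Box test: two rectangles intersect when
// they overlap on BOTH the X and Y axes.
function aabbIntersect(a, b) {
  return a.x <= b.x + b.width  && b.x <= a.x + a.width &&
         a.y <= b.y + b.height && b.y <= a.y + a.height;
}

var paddle = { x: 0, y: 0, width: 10, height: 40 };
var ballBox = { x: 8, y: 20, width: 5, height: 5 };
console.log(aabbIntersect(paddle, ballBox)); // true
```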
The logic for the right paddle is exactly the same, but the speed on the X axis should be negative instead. In my case I also added a `randomize()` function, so the game will be more interesting. You don't have to implement it this way, but a bit of randomness never hurts in gaming!
function randomize() {
// Random float between 0 (inclusive) and 1000 (exclusive)
var _rand = Math.random() * 1000;
// positive or negative?
return Math.random() > 0.5 ? _rand : _rand * -1;
}
We move the paddles with the keyboard. Keyboard controls can be handled simply by keeping track of which key is currently being pressed (watch for the `keydown` event). We can use a simple JavaScript object for that (or an array if you prefer):
// Handle keyboard controls
var keysDown = {};
addEventListener("keydown", function (e) {
keysDown[e.keyCode] = true;
}, false);
addEventListener("keyup", function (e) {
delete keysDown[e.keyCode];
}, false);
The `keyup` and `keydown` events are the only two we need for handling the whole keyboard. So on `keydown` we add the key; on `keyup` we remove it. Simple.
Of course we are going to need JavaScript objects for the paddles as well. In my game I called them `p1` and `p2`, which can be interpreted as players too.
Here’s the code:
// Update game objects
var update = function (modifier) {
if (87 in keysDown) { // P1 holding up (key: w)
p1.y -= p1.speed * modifier;
}
if (83 in keysDown) { // P1 holding down (key: s)
p1.y += p1.speed * modifier;
}
if (38 in keysDown) { // P2 holding up (key: arrow up)
p2.y -= p2.speed * modifier;
}
if (40 in keysDown) { // P2 holding down (key: arrow down)
p2.y += p2.speed * modifier;
}
}
Here’s the render()
function, in all its glory:
var render = function () {
ctx.fillStyle = "#0F0"; // green
// P1
ctx.fillRect(p1.x, p1.y, p1.width, p1.height);
// P2
ctx.fillRect(p2.x, p2.y, p2.width, p2.height);
// ball
ctx.fillRect(ball.x, ball.y, ball.size, ball.size);
// Text options
ctx.fillStyle = "rgb(250, 250, 250)";
ctx.font = "18px Helvetica";
ctx.textAlign = "left";
ctx.textBaseline = "top";
// P1 Score
ctx.fillText(p1.score, 32, 32);
// P2 Score
ctx.fillText(p2.score, canvas.width - 32, 32);
};
It’s probably worth mentioning that you can use JSON.stringify()
to debug your objects directly in the canvas, e.g.:
// Debugging the ball object
ctx.fillText("ball: " + JSON.stringify(ball), 0, 0);
However, I don’t recommend it. Just use whatever your browser is offering! If you are a web developer you surely know that there’s a built-in JavaScript console for debugging in your browser (if you don’t, search for developer tools).
We need to reset the game every time a player scores. The logic is very simple, we just need to provide default values for our objects. Example below.
// Reset the game
var reset = function () {
isGameStarted = false;
ball.x = (canvas.width - ball.size) / 2;
ball.y = (canvas.height - ball.size) / 2;
ball.speedX = randomize(); // randomly start going left or right
ball.speedY = 0;
}
This is the main logic of Pong. However, it's not perfect, and it could be improved a lot in several ways… for example by implementing physics rules (or by using a physics engine that has already done the job for us). We have just simulated the reflection of a ball off a surface, but it's not realistic at all, so let's make it better.
In a proper Pong game, you can usually control where the ball goes. It could have a steeper or shallower angle of reflection, based on where the ball landed. Should it land on one of the edges of the paddle, the collision should be inelastic. In case it lands exactly on the middle of the paddle, the collision should be totally elastic.
In order to implement physics rules in a game, you should have an understanding of basic vector math, trigonometry and - of course - physics. But don’t fear, you don’t have to know everything: just the basics. I personally didn’t know much about physics, but I learned it by reading about it.
Here are some useful resources on the Web:
Let’s explore together the potential of 2D vectors.
The main thing you'll have to understand is how vectors are used in game development. As an example, let's go back to our `ball` object and modify it to use vectors. It will look like this:
var ball = {
position: new Vector({ x: 0 , y: 0 }),
velocity: new Vector({ x: 0 , y: 0 })
}
Four values at the price of two attributes! And this is a lot better now, not only because we are using fewer attributes, but because we can use vector math. Believe me, vectors simplify your game a lot.
You may have noticed that I didn’t use speed
, but I used velocity
instead. The reason is that speed
is a scalar quantity, while velocity
is a vector quantity. Put simply, speed
is an information that’s contained in velocity
! You may want to read about it, albeit not directly related to programming.
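In code terms (my own sketch): the speed is simply the magnitude of the velocity vector:

```javascript
// Velocity is a vector (direction and magnitude); speed is its magnitude.
function speed(velocity) {
  return Math.sqrt(velocity.x * velocity.x + velocity.y * velocity.y);
}

console.log(speed({ x: 3, y: 4 })); // 5
```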
We can implement proper reflection (not a fake one) by using this JavaScript function:
var ball = {
// the velocity vector
velocity: new Vector(),
/*
* The formula:
*
* R = 2(V · N) * N - V
*
* V: velocity vector
* N: a normalized vector of the plane surface (e.g. paddle or wall)
* R: the reflected velocity vector
*/
deflect: function (N) {
var dot = this.velocity.dot(N);
var v1 = N.multiplyScalar(2 * dot);
this.velocity = v1.subSelf(this.velocity);
}
}
This is how I’ve implemented it by using a vector library I found on the Web (find the source code on GitHub). Given a paddle’s normal, it will reflect any vector, but you have to make sure the paddle’s normal is a unit vector (in other words, it’s normalized).
I hope you enjoyed this article. Those who have followed my blog since the beginning will probably remember my first blog post. It was more than 2 years ago, and at that time I was really excited by the idea of building a game with JavaScript. I finally did it, and it has been fun indeed! However, I learned a big lesson: although it was fun, it wasn't really worth reinventing the wheel.
So, if you got through all this tutorial, first of all congratulations! Secondly, consider using a game engine. Thirdly, maybe consider not using JavaScript… just use whatever you feel comfortable with. For instance, if you like the Ruby language (I do!), you could use Opal, a Ruby to JavaScript compiler.
You can play the game here.
The full source code is on GitHub so you can clone it, fork it and even make your own from scratch, if you feel like it's worth your time. If you are interested in the simple way, check out the v1.0 release. The hard way is in the master branch.
As always, if you have any thoughts or questions, feel free to leave a comment below.
Have fun!
]]>This script will install the latest build of Sublime Text 3.
Open your terminal and run:
curl -L git.io/sublimetext | sh
It will install Package Control as well, so you don't have to do it yourself.
If you are interested in seeing the actual code behind it, here we go:
https://gist.github.com/simonewebdesign/8507139
It should work on most Linux distros; if not, please let me know by leaving a comment below. I’m here to help.
Enjoy!
Update: When I wrote this script, my motivation was that there was no easy way to install Sublime Text on Linux. However, nowadays there is an official repository providing builds for all the major Linux package managers: see here.
]]>You might think that setting `.container`'s width to `960px` is sufficient; well, it's not quite true.
As per Bootstrap’s docs, you can disable responsiveness by forcing a fixed width to the container:
.container {
width: 960px !important;
}
I love Bootstrap, but personally if I had to build a fixed-width 960px site, which is quite old school nowadays, I wouldn’t use Bootstrap at all. And you know what? In most cases I wouldn’t even use a grid system! I’d use plain-old CSS (or Sass), and I’m pretty confident it would be fine. But that’s me. Of course you are free to do anything you want. But remember, Bootstrap’s focus is on mobile and responsive design.
If you need a 960px grid system, you may not want all the stuff that comes with Bootstrap. Also, you may want to think again about what you are going to build; this is way more important than the front end framework you will choose.
Now, this tutorial is for who wants a 960px site, but still preserving responsiveness.
What I’m going to explain is not a hack, it’s the way Bootstrap works.
When you need to change Bootstrap’s default width, the best way is to recompile its source code. Yeah, that sounds hard and time consuming, but don’t panic.
If you are using Sass or LESS, it will be very easy to customize the grid system. It really depends on which framework you are using, though.
E.g.: if you are using Ruby on Rails, chances are you are using the bootstrap-sass gem. The README on GitHub already covers everything you need to know in order to customize Bootstrap. The only thing you have to be aware of is that you should redefine variables before importing Bootstrap, otherwise Bootstrap will use the old ones.
These are the correct values for a 960px grid (in Sass):
// default is 30px
$grid-gutter-width: 20px;
// default is 1140px + $grid-gutter-width
$container-large-desktop: 940px + $grid-gutter-width;
You may want to disable the media query for large desktops, since you don't need it anymore. Changing `$screen-lg` to be `$screen-md` should do it.
I believe this is the best solution so far. It's far better than removing all the stuff related to large desktops.
If you are using plain CSS you can use the online build customizer. However, I recommend switching to Sass or LESS.
That’s it!
]]>First of all, make sure you remove RVM completely. It’s not compatible with rbenv.
rm -r ~/.rvm
Remove it from your `$PATH` as well.
I'm using fish shell, which has its own quirks; for example, it doesn't have an `export` command for exporting variables such as `$PATH`. Instead it uses `set`. E.g.:
set VARIABLE VALUE
For example, in order to call `rbenv`, I set up my `$PATH` this way:
set -U fish_user_paths $fish_user_paths ~/.rbenv/bin
Fish also handles things a bit differently. If you are using it, you'll probably be burned by the fact that it doesn't understand the `$(...)` syntax that POSIX shells use for command substitution in a subshell. Fortunately I managed to find a fix for that: see this article. Basically it says you need to add this code to your `config.fish` file:
set -gx RBENV_ROOT /usr/local/var/rbenv
. (rbenv init -|psub)
But pay attention and make sure you understand what's going on here. Actually, the code above didn't work for me, as the installation path of my rbenv was different. If you installed rbenv with `git clone`, the right code is:
set -gx RBENV_ROOT ~/.rbenv
. (rbenv init -|psub)
In fish it’s also possible (albeit not recommended) to use the config.fish
file in order to set the $PATH
variable permanently. You can do it with (e.g.):
set -x PATH ~/.rbenv/shims /usr/local/bin /usr/bin /bin $PATH
A big gotcha here is to have `~/.rbenv/shims` before `/bin` and `/usr/bin`, otherwise the shell will load the system's Ruby first (and you don't want to use the system's Ruby for your projects).
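You can see why the order matters with a tiny experiment (my own sketch, using throwaway scripts instead of real rubies): the shell always runs the first match it finds on `$PATH`:

```shell
# Two fake "ruby" executables to demonstrate PATH lookup order.
mkdir -p /tmp/pathdemo/shims /tmp/pathdemo/system
printf '#!/bin/sh\necho "shim ruby"\n'   > /tmp/pathdemo/shims/ruby
printf '#!/bin/sh\necho "system ruby"\n' > /tmp/pathdemo/system/ruby
chmod +x /tmp/pathdemo/shims/ruby /tmp/pathdemo/system/ruby

# shims first: the shim wins (this is what you want with rbenv)
PATH="/tmp/pathdemo/shims:/tmp/pathdemo/system" ruby
# system first: the system ruby shadows the shim
PATH="/tmp/pathdemo/system:/tmp/pathdemo/shims" ruby
```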
To ensure I was using the right Ruby version, I moved the system Ruby away, into `/tmp`. Of course you need `sudo` for that:
sudo mv /usr/bin/ruby /tmp
Another super important thing: NEVER EVER install gems using `sudo`. If you do, you're going to have serious problems, conflicts and weird errors in your shell. Do yourself a favour by installing things in your home directory (`~`) and avoiding `sudo` at all costs. Always.
A good way to check that you are going down the right path is to use `which`: `which rbenv`, `which ruby` and `which gem` will tell you whether you actually have your stuff in the right place (that is, under `.rbenv/shims` in your home folder).
At this stage you may be able to install Ruby (you need the ruby-build plugin for that). Run:
rbenv install -l
The command above will give you a list of all the available rubies to install. Run, for example:
rbenv install 2.1.2
rbenv rehash
The above will install Ruby 2.1.2 into `~/.rbenv/versions` and rebuild your shim files. Note that you need to run `rbenv rehash` every time you install a new version of Ruby.
Another useful command is:
rbenv global
This tells you which version of Ruby you are using globally. It may differ from what `ruby -v` says, and if that's your case, you'll probably want to check your `$PATH`.
Hopefully that’s enough for getting you started with rbenv. Enjoy!
]]>I did it! I’ve finally migrated my blog to Octopress. It was a bit of a PITA, and it took a lot more than what I expected, but I did it.
Apologies if you weren’t able to see the website yesterday; the DNS took about 11 hours to propagate, and the site was back UP just this morning. It is now hosted by Heroku and it’s faster than ever.
Prepare yourself to see lots of new stuff in the next few weeks! ;-)
]]>You don’t believe me, do you? Fair enough, but let me show you why Ruby is so awesome.
Yes, it’s true. Ruby is so simple and intuitive that you can think in English before writing some Ruby code. For example:
def speak_english
print "Hello, world!"
end
The code above is a Ruby function (or method) declaration. So, when you want to run the `speak_english` function, you do it this way:
speak_english
Hello, world! => nil
You may have noticed the `nil`: what's that? It's just nothing, literally. It represents the void (emptiness, no value at all). In other languages, such as SQL (the mother tongue of databases), you can find it as `NULL`.
Do you know about OOP? It means Object-Oriented Programming, and it's probably the most important programming paradigm invented so far. Ruby takes full advantage of OOP. And when I say full, I literally mean: everything in Ruby is an object. Even numbers! If you know at least one programming language, say Java, you must be aware of the fact that Java numbers are primitive types, which means they're not objects. In Ruby, things are different.
Let’s make an example. Let’s say you want to use the speak_english
function 3 times. In Java, you’d do something like:
public class HelloWorld
{
public static void main(String[] args)
{
for (int i = 0; i < 3; i++)
{
speakEnglish();
}
}
public static void speakEnglish()
{
System.out.println("Hello, world!");
}
}
So much code for something so simple… in Ruby, instead, you can do this:
3.times do
speak_english
end
Hello, world!Hello, world!Hello, world! => 3
See? I called a method on a number! Cool, isn't it? And I used only 3 lines of code :-)
I was a PHP developer when I discovered Ruby. Although I had a bit of OOP background, I was used to writing PHP code in a procedural style. Procedural code looks something like:
doThis();
doThat();
doSomethingElse();
There’s absolutely nothing wrong with this approach, apart from the fact that it starts being cumbersome, sometimes… because it’s not Object-Oriented. I’ll make one last example, taken from a beautiful StackOverflow’s answer.
Reverse the words in this string:
backwards is sentence This
So the final result must be:
This sentence is backwards
When you think about how you would do it, you'd do the following: split the sentence into words, reverse the words, then join them back together with spaces.
In PHP, you’d do this:
$sentence = "backwards is sentence This";
$splitted = explode(" ", $sentence);
$reversed = array_reverse($splitted);
$rejoined = implode(" ", $reversed);
In Python:
sentence = "backwards is sentence This"
splitted = sentence.split()
reversed = reversed(splitted)
rejoined = " ".join(reversed)
And Ruby:
sentence = "backwards is sentence This"
splitted = sentence.split
reversed = splitted.reverse
rejoined = reversed.join " "
Every language required 4 lines of code. Now let’s compare the one-liners.
implode(" ", array_reverse(explode(" ", $sentence)));
" ".join(reversed(sentence.split()))
sentence.split.reverse.join " "
Now, can you see the beauty of Ruby? It’s just… magic.
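If you want to check the Ruby one-liner for yourself, it runs as-is:

```ruby
# Reverse the words of the sentence with plain method chaining.
sentence = "backwards is sentence This"
puts sentence.split.reverse.join " "
# prints: This sentence is backwards
```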
]]>