A few days ago there were about 20 job offers for Haskell. In a single day! How is that possible? As a true haskeller, I find this situation unbearable!
After all, we must avoid success at all cost. And I’ll help SPJ achieve this honorable goal.
Imagine a situation where you see people demonstrating some interest in learning Haskell.
Quick! Prevent them from going further.
If they come from dynamic (uni-typed) languages like Python, Javascript…:
Haskell? A statically typed language??? Hmm… You mean like C and Java?
Such a remark should immediately shut down any interest in Haskell.
If they want to produce applications with it:
Haskell? Isn’t it only a language for students! I don’t think it is useful for REAL WORLD applications!
If they just want to learn something new:
Haskell? Ah yes, I remember: mostly they only have the equivalent of Java interfaces, and they stopped there. They don’t even have classes!!!! Can you imagine? And I won’t even mention class inheritance.
We’re in 2016! And they don’t even support basic Object Oriented Programming. What a joke!
If they love low level programming:
Haskell? Ah yes, I heard that laziness makes it impossible to reason about code complexity and generally causes lots of space leaks.
And if it is not enough:
Haskell? Ah yes. I’m not a fan of their Stop the World GC.
If they come from LISP and the statically typed language remark wasn’t enough, try mentioning the lack of macros in Haskell. Don’t mention Template Haskell, and even less Generics and all the recent progress in GHC.
Many hints there: stack
, cabal freeze
, … While Nix is great, forcing a new user completely alien to all these concepts to first learn it before writing their first line of code can greatly reduce their enthusiasm. Bonus points if you make them believe you can only program in Haskell on NixOS.
The very first thing to do is to explain how Haskell is so easy to learn. How natural it is for everybody you know. And that, except for someone you always considered very dumb, everybody was very productive in Haskell within a few hours.
Use vocabulary alien to them as much as possible. Here is a list of terms you should use in the very first minutes of your description of Haskell:
Each of these terms will hopefully be intimidating.
Please don’t provide an obvious first example like:
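Say, something as plain as this (my guess at the kind of snippet meant; the original example is not shown):

```haskell
-- the boring, obvious first example you must NOT show
main :: IO ()
main = putStrLn "Hello, world!"
```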
Instead prefer a fully servant example:
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE TypeOperators #-}
module App where
import Control.Monad.Trans.Except
import Data.Aeson
import GHC.Generics
import Network.Wai
import Network.Wai.Handler.Warp
import Servant
import System.IO
type ItemApi =
"item" :> Get '[JSON] [Item] :<|>
"item" :> Capture "itemId" Integer :> Get '[JSON] Item
itemApi :: Proxy ItemApi
itemApi = Proxy
run :: IO ()
run = do
let port = 3000
settings =
setPort port $
setBeforeMainLoop (hPutStrLn stderr ("listening on port " ++ show port)) $
defaultSettings
runSettings settings =<< mkApp
mkApp :: IO Application
mkApp = return $ serve itemApi server
server :: Server ItemApi
server =
getItems :<|>
getItemById
type Handler = ExceptT ServantErr IO
getItems :: Handler [Item]
getItems = return [exampleItem]
getItemById :: Integer -> Handler Item
getItemById = \case
0 -> return exampleItem
_ -> throwE err404
exampleItem :: Item
exampleItem = Item 0 "example item"
data Item
= Item {
itemId :: Integer,
itemText :: String
}
deriving (Eq, Show, Generic)
instance ToJSON Item
instance FromJSON Item
This nice example should overflow the number of new concepts a Haskell newcomer has to deal with:
:<|>
'[] instead of []
Proxy
$
deriving — ha ha! You’ll need to explain typeclasses first!
getItemById
Of course, spend most of your energy explaining the language extensions first. Go into a great deal of detail and, if possible, make as many references to Category Theory as you can. You’ll get bonus points if you mention HoTT! Double bonus points if you explain that understanding all the details of HoTT is essential to using Haskell on a daily basis.
Explain that what this does is incredible, but for the wrong reasons. For example don’t mention why instance ToJSON Item
is great. But insist that we achieved serving JSON with extreme elegance and simplicity. Keep insisting on the simplicity and forget to mention type safety, which is one of the main benefits of Servant.
If you’re afraid that this example might be too close to a real world product, you can simply use some advanced lenses examples:
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens.TH (makePrisms)
import GHC.Generics (Generic)
import Lens.Family.Total
data Example a b c = C1 a | C2 b | C3 c deriving (Generic)
makePrisms ''Example
instance (Empty a, Empty b, Empty c) => Empty (Example a b c)
example :: Example String Char Int -> String
example = _case
& on _C1 (\s -> s )
& on _C2 (\c -> replicate 3 c )
& on _C3 (\n -> replicate n '!')
Certainly a great example to start a new language with.
Suggest replacing any easy-to-read function like log :: Level -> String -> IO ()
by an obfuscated operator like (<=.=$$.)
.
If the trends continue toward growth, then we might need to go further at the risk of breaking our own ecosystem: use unsafePerformIO
as much as possible. Yes we said, at all cost!
So with all of this I believe we should be on the right track to avoid success at all cost!
Sorry? What?
Oh… Apparently I made a precedence mistake!
SPJ didn’t ask to avoid success $ at all cost
but to avoid $ success at all cost1.
Sorry! My bad! Forget about all of this. Keep up the good work everybody! Haskell is certainly one of the most awesome languages in the world! Its community is also just great.
I’m really happy to see it grow every year. Thanks to all the contributors who make it possible to still have a lot of fun after so many years of using Haskell!
And the fact that in Haskell the right choice is preferred to the easy choice certainly helped.
1. A good point to use more LISP syntax.↩
tl;dr: Some hints on how to make great documentation for Haskell libraries.
Write a Tutorial
module containing nothing except documentation. Declare the Tutorial
module in your cabal
description. Use doctest
to check your documentation is up to date.
Great documentation makes a big difference. Bad documentation can simply make people not use your lib.
My friend was learning Haskell. To start, he tried a Haskell library to make a small application. The documentation was so out of date that he couldn’t get a basic example to work. How do you think he felt? What did he think about Haskell in general?
So here are my hints on how to make great documentation in Haskell.
Documentation can take many different forms. Let’s start with a Tutorial
(or Guide.GuideTopic
) module. Example of a Tutorial
module content:
{-# OPTIONS_GHC -fno-warn-unused-imports #-}
{-|
Use @my-package@ if you want to ...
-}
module Data.Duration.Tutorial (
-- * Introduction
-- $introduction
-- ** Subsection
-- $subsection
) where
import Data.Duration
{- $introduction
So here how you use it:
>>> humanReadableDuration 1002012.002
"11 days 14 hours 20 min 12s 2ms"
The function is 'humanReadableDuration' and
the you'll be able to click on it to go
to its definition.
You can add images: <<path-to-image.png title>>
and links: <http://haskell-lang.org haskell>.
-}
{- $subsection
This is a chunk of documentation
not attached to any particular Haskell
declaration with an untested code block:
> answer = 42
-}
To prevent your tutorial from becoming obsolete, use doctest
.
That way, when you run stack test
or cabal test
you’ll get errors if some example doesn’t work anymore.
doctest
is a great way to provide examples in your code documentation. These examples will then be used as tests. Apparently the idea comes from the Python community.
To use doctest
, this is very simple:
-- | My function description
--
-- >>> myFunction 3 4
-- 7
myFunction :: Int -> Int -> Int
myFunction = (+)
And to make it work, simply verify you have a test
block in your .cabal
file looking like this:
test-suite doctest
type: exitcode-stdio-1.0
hs-source-dirs: test
main-is: DocTest.hs
build-depends: base >= 4.7 && < 5
, <YOUR_LIBRARY>
, Glob >= 0.7
, doctest >= 0.9.12
and in test/DocTest.hs
simply use
module Main where
import Test.DocTest (doctest)
import System.FilePath.Glob (glob)
main = glob "src/**/*.hs" >>= doctest
Now stack test
or cabal test
will check the validity of your documentation.
Install haddock with stack install haddock
or cabal install haddock
, then check your documentation coverage:
> haddock src/**/*.hs
Haddock coverage:
100% ( 15 / 15) in 'Data.Duration'
100% ( 3 / 3) in 'Data.Duration.Tutorial'
There are plenty of alternative solutions; I present the one I believe most people would use. So if you use github
, simply create an account on travis
.
Add a .travis.yml
file in your repo containing the content of the file here, and remove the builds you don’t need. It will build your project using a lot of different GHC versions and environments.
If you are afraid of its complexity, you might just want to use this one:
sudo: false
addons:
apt:
packages:
- libgmp-dev
# Caching so the next build will be fast too.
cache:
directories:
- $HOME/.stack
before_install:
# Download and unpack the stack executable
- mkdir -p ~/.local/bin
- export PATH=$HOME/.local/bin:$PATH
- travis_retry curl -L https://www.stackage.org/stack/linux-x86_64 | tar xz --wildcards --strip-components=1 -C ~/.local/bin '*/stack'
script:
- stack setup && stack --no-terminal --skip-ghc-check test
Don’t forget to activate your repo in travis.
For some bonus points add the build status badge in your README.md
file:
Congratulations! Now if you break your documentation examples, you’ll get notified.
You could add badges to your README.md
file.
Here is a list of some: shields.io
If you haven’t declared your package to stackage
yet, please do it. It isn’t much work: just edit a file to add your package. Then you’ll be able to add another badge:
See Stackage Badges for more information.
stack
If you use stack
I suggest you use the tasty-travis
template. It will include the boilerplate for:
So edit your ~/.stack/config.yaml
like this:
templates:
params:
author-name: Your Name
author-email: your@mail.com
copyright: 'Copyright: (c) 2016 Your Name'
github-username: yourusername
category: Development
And then you can create a new project with:
stack new my-project tasty-travis
Even without doing anything, if you submit your library to hackage, haddock will generate some API documentation for free.
But to make real documentation you need to add some manual annotations.
Functions:
-- | My function description
myFunction :: T1 -- ^ arg1 description
-> T2 -- ^ arg2 description
myFunction arg1 arg2 = ...
Data:
data MyData a b
= C1 a b -- ^ doc for constructor C1
| C2 a b -- ^ doc for constructor C2
data MyData a b
= C { a :: TypeA -- ^ field a description
, b :: TypeB -- ^ field b description
}
Module:
{-|
Module : MyModule
Description: Short description
Copyright : (c)
License : MIT
Here is a longer description of this module.
With some code symbol @MyType@.
And also a block of code:
@
data MyData = C Int Int
myFunction :: MyData -> Int
@
-}
Documentation Structure:
module MyModule (
-- * Classes
C(..),
-- * Types
-- ** A data type
T,
-- ** A record
R,
-- * Some functions
f, g
) where
That will generate headings.
In Haskell we have great tools like hayoo!
and hoogle
.
And hackage
and stackage
also provide a lot of information.
But generally we lack Tutorials and Guides. This post was an attempt to help people make more of them.
But there are other good ideas to help improve the situation.
In clojure when you create a new project using lein new my-project
a directory doc
is created for you. It contains a file with a link to this blog post:
If you try to search for some clojure function on a search engine there is a big chance the first result will link to:
clojuredocs.org
: try to search for reduce
, update-in
or index
for example.
For each symbol needing documentation, you don’t only get the details and the standard documentation. You’ll also get:
clojuredocs.org
is an independent website from the official Clojure website.
Most of the time, if you google the function you’re looking for, you end up on clojuredocs, which receives many contributions.
Currently stackage is closer to these features than hackage, because on stackage you have access to the README and also some per-package comments.
I believe it would be more efficient to have at least a page per module, and why not a page per symbol (data, functions, typeclasses…).
For example, we could provide details about foldl
. Also, as there would be less information to display, it would make the design cleaner.
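To make this concrete, a detail page for foldl could embed small runnable examples like this sketch (my own illustration, not an existing page):

```haskell
import Data.List (foldl')

-- foldl folds from the left:
--   foldl (-) 10 [1,2,3]  ==  ((10 - 1) - 2) - 3
leftFold :: Int
leftFold = foldl (-) 10 [1, 2, 3]

-- foldl' is the strict variant, usually preferred on long lists
-- to avoid building a long chain of thunks (a space leak)
bigSum :: Integer
bigSum = foldl' (+) 0 [1 .. 1000000]

main :: IO ()
main = print (leftFold, bigSum)
```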
Today, if you want to help documenting, you need to make a PR to the source of some library. While if we had an equivalent to clojuredocs for Haskell, adding documentation would simply be a few clicks away:
There are more than 23k people on /r/haskell
. If only 1% of them spent 10 minutes adding a bit of documentation, it would certainly change a lot in the perceived documentation quality.
And last but not least,
Design is a vague word. A good design should care not only about how something looks, but also about how users will interact with it. For example by removing things to focus on the essential.
When I stumble upon some random blog post or random specification in the Haskell community, I too often get a feeling of old-fashioned design.
If you look at the node.js community, a lot of their web pages look cleaner, easier to read and, in the end, more user friendly.
Haskell is very different from node; I wouldn’t like to replace all the long and precise documentation with short, imprecise prose. I don’t want to turn scientific papers into tweets.
But just as the scientific community upgraded with the use of LaTeX, I believe we could find something similar that would make a very clean environment for most of us. A kind of look and feel that will be
tl;dr: How to use Vim as a very efficient IDE
In Learn Vim Progressively I showed how great Vim is for editing text and navigating within a single file (buffer). In this short article you’ll see how I use Vim as an IDE, mainly by using some great plugins.
There are a lot of Vim plugins. To manage them I use vim-plug
.
To install it:
mkdir -p ~/.vim/autoload
curl -fLo ~/.vim/autoload/plug.vim \
https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim
☞ Note I have two parts in my .vimrc
. The first part contains the list of all my plugins. The second part contains the personal preferences I set for each plugin. I’ll separate each part with ...
in the code.
Before anything, you should protect your eyes using a readable and low contrast colorscheme.
For this I use solarized dark. To add it, you only have to write this in your ~/.vimrc
file:
call plug#begin('~/.vim/plugged')
Plug 'altercation/vim-colors-solarized'
call plug#end()
" -- solarized personal conf
set background=dark
try
colorscheme solarized
catch
endtry
You should be able to see and destroy trailing whitespaces.
Plug 'bronson/vim-trailing-whitespace'
You can clean trailing whitespace with :FixWhitespace
.
You should also see your 80th column.
if (exists('+colorcolumn'))
set colorcolumn=80
highlight ColorColumn ctermbg=9
endif
One of the most important hidden skills in programming is the ability to search and find files in your projects.
The majority of people use something like NERDTree
. This is the classical left column with a tree of your project’s files. I stopped using it, and you probably should too.
I switched to unite: no left column lost, and it’s faster to find files. It works much like Spotlight on OS X.
First install ag
(the silver searcher). If you don’t know ack
or ag
your life is going to be upgraded. This is a simple but essential tool. It is mostly a grep
on steroids.
" Unite
" depend on vimproc
" ------------- VERY IMPORTANT ------------
" you have to go to .vim/plugin/vimproc.vim and do a ./make
" -----------------------------------------
Plug 'Shougo/vimproc.vim'
Plug 'Shougo/unite.vim'
...
let g:unite_source_history_yank_enable = 1
try
let g:unite_source_rec_async_command='ag --nocolor --nogroup -g ""'
call unite#filters#matcher_default#use(['matcher_fuzzy'])
catch
endtry
" search a file in the filetree
nnoremap <space><space> :split<cr> :<C-u>Unite -start-insert file_rec/async<cr>
" reset; note it is <C-l> normally
:nnoremap <space>r <Plug>(unite_restart)
Now type space twice. A list of files appears. Start typing some letters of the file you are searching for, select it, hit return, and bingo: the file opens in a new horizontal split.
If something goes wrong just type <space>r
to reset the unite cache.
Now you are able to search file by name easily and efficiently.
Now let’s search for text in many files. For this you use ag
:
Plug 'rking/ag.vim'
...
" --- type ° to search the word in all files in the current dir
nmap ° :Ag <c-r>=expand("<cword>")<cr><cr>
nnoremap <space>/ :Ag
Don’t forget to add a space after the :Ag
.
These are two of the most powerful shortcuts for working in a project. I use °
which is nicely positioned on my azerty
keyboard. You should use a key close to *
.
So what does °
do? It reads the string under the cursor and searches for it in all files. Really useful to find where a function is used.
If you type <space>/
followed by a string, it will search for all occurrences of this string in the project files.
So with this you should already be able to navigate between files very easily.
Show which lines changed since your last commit.
Plug 'airblade/vim-gitgutter'
And the de facto git plugin:
Plug 'tpope/vim-fugitive'
You can reset your changes from the latest git commit with :Gread
. You can stage your changes with :Gwrite
.
Plug 'junegunn/vim-easy-align'
...
" Easy align interactive
vnoremap <silent> <Enter> :EasyAlign<cr>
Just select and type Return
then space
. Type Return
several times to cycle through the alignments.
If you want to align the second column, Return
then 2
then space
.
C-n
& C-p
Vim has a basic auto completion system. The shortcuts are C-n
and C-p
while you are in insert mode. This is generally good enough in most cases, for example when I open a file in a language I haven’t configured.
My current Haskell programming environment is great!
Each time I save a file, I get comments pointing at my errors or proposing ways to improve my code.
So here we go:
☞ Don’t forget to install
ghc-mod
with: cabal install ghc-mod
" ---------- VERY IMPORTANT -----------
" Don't forget to install ghc-mod with:
" cabal install ghc-mod
" -------------------------------------
Plug 'scrooloose/syntastic' " syntax checker
" --- Haskell
Plug 'yogsototh/haskell-vim' " syntax indentation / highlight
Plug 'enomsg/vim-haskellConcealPlus' " unicode for haskell operators
Plug 'eagletmt/ghcmod-vim'
Plug 'eagletmt/neco-ghc'
Plug 'Twinside/vim-hoogle'
Plug 'pbrisbin/html-template-syntax' " Yesod templates
...
" -------------------
" Haskell
" -------------------
let mapleader="-"
let g:mapleader="-"
set tm=2000
nmap <silent> <leader>ht :GhcModType<CR>
nmap <silent> <leader>hh :GhcModTypeClear<CR>
nmap <silent> <leader>hT :GhcModTypeInsert<CR>
nmap <silent> <leader>hc :SyntasticCheck ghc_mod<CR>:lopen<CR>
let g:syntastic_mode_map={'mode': 'active', 'passive_filetypes': ['haskell']}
let g:syntastic_always_populate_loc_list = 1
nmap <silent> <leader>hl :SyntasticCheck hlint<CR>:lopen<CR>
" Auto-checking on writing
autocmd BufWritePost *.hs,*.lhs GhcModCheckAndLintAsync
" neocomplcache (advanced completion)
autocmd BufEnter *.hs,*.lhs let g:neocomplcache_enable_at_startup = 1
function! SetToCabalBuild()
if glob("*.cabal") != ''
set makeprg=cabal\ build
endif
endfunction
autocmd BufEnter *.hs,*.lhs :call SetToCabalBuild()
" -- neco-ghc
let $PATH=$PATH.':'.expand("~/.cabal/bin")
Just enjoy!
I use -
for my leader because I use ,
a lot for its native usage.
-ht
will highlight and show the type of the block under the cursor.
-hT
will insert the type of the current block.
-hh
will unhighlight the selection.
My main language at work is Clojure, and my current vim environment is quite good. I lack the automatic integration with lein-kibit
though. If I have the courage I might do it myself one day. But due to the very long startup time of Clojure, I doubt I’ll be able to make a useful vim plugin.
So mainly you’ll get real rainbow parentheses (the default values are broken for solarized).
I used the vim paredit
plugin before. But it is too restrictive. Now I use sexp
which feels more coherent with the spirit of vim.
" " -- Clojure
Plug 'kien/rainbow_parentheses.vim'
Plug 'guns/vim-clojure-static'
Plug 'guns/vim-sexp'
Plug 'tpope/vim-repeat'
Plug 'tpope/vim-fireplace'
...
autocmd BufEnter *.cljs,*.clj,*.cljs.hl RainbowParenthesesActivate
autocmd BufEnter *.cljs,*.clj,*.cljs.hl RainbowParenthesesLoadRound
autocmd BufEnter *.cljs,*.clj,*.cljs.hl RainbowParenthesesLoadSquare
autocmd BufEnter *.cljs,*.clj,*.cljs.hl RainbowParenthesesLoadBraces
autocmd BufEnter *.cljs,*.clj,*.cljs.hl setlocal iskeyword+=?,-,*,!,+,/,=,<,>,.,:
" -- Rainbow parenthesis options
let g:rbpt_colorpairs = [
\ ['darkyellow', 'RoyalBlue3'],
\ ['darkgreen', 'SeaGreen3'],
\ ['darkcyan', 'DarkOrchid3'],
\ ['Darkblue', 'firebrick3'],
\ ['DarkMagenta', 'RoyalBlue3'],
\ ['darkred', 'SeaGreen3'],
\ ['darkyellow', 'DarkOrchid3'],
\ ['darkgreen', 'firebrick3'],
\ ['darkcyan', 'RoyalBlue3'],
\ ['Darkblue', 'SeaGreen3'],
\ ['DarkMagenta', 'DarkOrchid3'],
\ ['Darkblue', 'firebrick3'],
\ ['darkcyan', 'SeaGreen3'],
\ ['darkgreen', 'RoyalBlue3'],
\ ['darkyellow', 'DarkOrchid3'],
\ ['darkred', 'firebrick3'],
\ ]
Working with Clojure will become much smoother. You can eval any part of your code, though you must launch a Clojure REPL manually in another terminal.
I hope it will be useful.
Last but not least, if you want to use my vim configuration you can get it here:
tl;dr: To install Haskell (OS X and Linux), copy/paste the following line in a terminal:
curl https://raw.githubusercontent.com/yogsototh/install-haskell/master/install-haskell.sh | sudo zsh
If you are on Windows, download Haskell Platform and follow the instructions to use Haskell LTS.
If you want to know the why and the how, read the rest of the article.
Haskell’s greatest weakness has nothing to do with the language itself but with its ecosystem.
The main problem I’ll try to address is the one known as cabal hell. The community is really active in fixing the issue, and I am confident that in less than a year this problem will be a thing of the past. But to work today, I provide an install method that should greatly reduce two effects of cabal hell:
With this installation method, you should minimize your headaches and almost never hit a dependency error. But some may still occur. If you encounter any dependency error, gently ask the package maintainer to port the package to stackage.
So to install, copy/paste the following line in your terminal:
curl https://raw.githubusercontent.com/yogsototh/install-haskell/master/install-haskell.sh | sudo zsh
You can read the script and you will see that this is quite straightforward.
It will download the GHC
binary for your system and install it, then install the cabal
program.
As the versions of the libraries are fixed until you update the Haskell LTS version, you should never use cabal sandbox. That way, you will only compile each needed library once. The compiled objects/binaries will be in your ~/.cabal
directory.
This script use the latest Haskell LTS. So if you use this script at different dates, the Haskell LTS might have changed.
When it comes to cabal hell, the usual solutions are sandboxes and nix
. Unfortunately, sandboxes didn’t work well enough for me after some time. Furthermore, sandboxes force you to re-compile everything per project. If you have three yesod projects, for example, that means a lot of time and CPU. Also, nix
didn’t work as expected on OS X. So fixing the list of packages to a stable set seems to me the most pragmatic way to handle the problem today.
From my point of view, Haskell LTS is the best step in the right direction. The actual cabal hell problem is more a human problem than a tool problem. There is a bias in most programmers to prefer solving social issues with tools. There is nothing wrong with hackage and cabal. But for a package manager to work in a statically typed language like Haskell, packages must all work together. It is a great strength of statically typed languages that they ensure that a big part of the API between packages is compatible. But this makes the job of package management far more difficult than in dynamic languages.
People tend not to respect the rules of package versioning1. They break their API all the time. So we need a way to organize all of that, and this is precisely what Haskell LTS provides: a set of stable packages that work together. If a developer breaks an API, the package won’t work anymore in stackage; either the developer fixes the package or all the other packages update their usage. In the meantime, Haskell LTS end users can develop without dependency issues.
The image of the cat about to jump that I slightly edited can be found here
1. I myself am guilty of such behavior. It was a beginner’s mistake.↩
tl;dr: Learn how to start a new Haskell project, with the Haskell project creator itself as the example.
“Good Sir Knight, will you come with me to Camelot, and join us at the Round Table?”
In order to work properly with Haskell you need to initialize your environment. Typically, you need a cabal file and some tests for your code: both unit tests and property-based tests (random, and exhaustive up to a certain depth). You need to use git
and generally host the project on github. Also, it is recommended to use cabal sandboxes. And as a bonus, an auto-update tool that recompiles and retests on each file save.
In this article, we will create such an environment using a zsh script. Then we will write a Haskell project which does the same work as the zsh script. You will then see how to work in such an environment.
If you are starting to understand Haskell but still consider yourself a beginner, this tutorial will show you how to make a real application using, quite surprisingly, a lot of features:
☞ zsh is by nature more suitable for file manipulation. But the Haskell code is clearly more organized, while quite terse for a general-purpose language.
☞ holy-project is on hackage. It can be installed with cabal update && cabal install holy-project
.
I recently read this excellent article: How to Start a New Haskell Project.
While the article is very good, I missed some minor details1. Inspired by it, I created a simple script to initialize a new Haskell project, improving a few things along the way.
If you want to use this script, the steps are:
install cabal-install
(at least 1.18)
# Download the script
git clone https://github.com/yogsototh/init-haskell-project.git
# Copy the script into a directory in your PATH variable
cp init-haskell-project/holy-haskell.sh ~/bin
# Go to the directory containing all your projects
cd my/projects/directory
# Launch the script
holy-haskell.sh
What does this script do that cabal init
doesn’t do?
It initializes git
with the right .gitignore
file, uses tasty
to organize your tests (HUnit, QuickCheck and SmallCheck), and turns on -Wall
for ghc
compilation. Why zsh
really?
Developing the script in zsh
was easy. But considering its size, it is worth rewriting in Haskell. Furthermore, it will be a good exercise.
First, we initialize a new Haskell project with holy-haskell.sh
:
> ./holy-haskell.sh
Bridgekeeper: Stop!
Bridgekeeper: Who would cross the Bridge of Death
Bridgekeeper: must answer me these questions three,
Bridgekeeper: ere the other side he see.
You: Ask me the questions, bridgekeeper, I am not afraid.
Bridgekeeper: What is the name of your project?
> Holy project
Bridgekeeper: What is your name? (Yann Esposito (Yogsototh))
>
Bridgekeeper: What is your email? (Yann.Esposito@gmail.com)
>
Bridgekeeper: What is your github user name? (yogsototh)
>
Bridgekeeper: What is your project in less than ten words?
> Start your Haskell project with cabal, git and tests.
Initialize git
Initialized empty Git repository in .../holy-project/.git/
Create files
.gitignore
holy-project.cabal
Setup.hs
LICENSE (MIT)
test/Test.hs
test/HolyProject/Swallow/Test.hs
src/HolyProject/Swallow.hs
test/HolyProject/Coconut/Test.hs
src/HolyProject/Coconut.hs
src/HolyProject.hs
src/Main.hs
Cabal sandboxing, install and test
... many compilations lines ...
Running 1 test suites...
Test suite Tests: RUNNING...
Test suite Tests: PASS
Test suite logged to: dist/test/holy-project-0.1.0.0-Tests.log
1 of 1 test suites (1 of 1 test cases) passed.
All Tests
Swallow
swallow test: OK
coconut
coconut: OK
coconut property: OK
148 tests completed
All 3 tests passed
Bridgekeeper: What... is the air-speed velocity of an unladen swallow?
You: What do you mean? An African or European swallow?
Bridgekeeper: Huh? I... I don't know that.
[the bridgekeeper is thrown over]
Bridgekeeper: Auuuuuuuuuuuugh
Sir Bedevere: How do you know so much about swallows?
You: Well, you have to know these things when you're a king, you know.
The different steps are:
Features to note:
~/.gitconfig
file in order to provide a default name and email.
So, apparently nothing too difficult to achieve.
We should now have an initialized Haskell environment to work in. The first thing you should do is go into this new directory and launch ‘./auto-update’ in some terminal. I personally use tmux
on Linux or the splits in iTerm 2
on Mac OS X. Now, any modification of a source file will relaunch a compilation and a test.
To print the introduction text in zsh
:
# init colors
autoload colors
colors
for COLOR in RED GREEN YELLOW BLUE MAGENTA CYAN BLACK WHITE; do
eval $COLOR='$fg_no_bold[${(L)COLOR}]'
eval BOLD_$COLOR='$fg_bold[${(L)COLOR}]'
done
eval RESET='$reset_color'
# functions
bk(){print -- "${GREEN}Bridgekeeper: $*${RESET}"}
bkn(){print -n -- "${GREEN}Bridgekeeper: $*${RESET}"}
you(){print -- "${YELLOW}You: $*${RESET}"}
...
# the introduction dialog
bk "Stop!"
bk "Who would cross the Bridge of Death"
bk "must answer me these questions three,"
bk "ere the other side he see."
you "Ask me the questions, bridgekeeper, I am not afraid.\n"
...
# the final dialog
print "\n\n"
bk "What... is the air-speed velocity of an unladen swallow?"
you "What do you mean? An African or European swallow?"
bk "Huh? I... I don't know that."
log "[the bridgekeeper is thrown over]"
bk "Auuuuuuuuuuuugh"
log "Sir Bedevere: How do you know so much about swallows?"
you "Well, you have to know these things when you're a king, you know."
In the first Haskell version I don’t use colors. As you can see, it is almost a copy/paste; I just added the types.
bk :: String -> IO ()
bk str = putStrLn $ "Bridgekeeper: " ++ str
bkn :: String -> IO ()
bkn str = putStr $ "Bridgekeeper: " ++ str
you :: String -> IO ()
you str = putStrLn $ "You: " ++ str
intro :: IO ()
intro = do
bk "Stop!"
bk "Who would cross the Bridge of Death"
bk "must answer me these questions three,"
bk "ere the other side he see."
you "Ask me the questions, bridgekeeper, I am not afraid.\n"
end :: IO ()
end = do
putStrLn "\n\n"
bk "What... is the air-speed velocity of an unladen swallow?"
you "What do you mean? An African or European swallow?"
bk "Huh? I... I don't know that."
putStrLn "[the bridgekeeper is thrown over]"
bk "Auuuuuuuuuuuugh"
putStrLn "Sir Bedevere: How do you know so much about swallows?"
you "Well, you have to know these things when you're a king, you know."
Now let’s just add the colors using the ansi-terminal
package. So we have to add ansi-terminal
as a build dependency in our cabal file.
Edit holy-project.cabal
to add it.
...
build-depends: base >=4.6 && <4.7
, ansi-terminal
...
Now look at the modified Haskell code:
import System.Console.ANSI
colorPutStr :: Color -> String -> IO ()
colorPutStr color str = do
setSGR [ SetColor Foreground Dull color
, SetConsoleIntensity NormalIntensity
]
putStr str
setSGR []
bk :: String -> IO ()
bk str = colorPutStr Green ("Bridgekeeper: " ++ str ++ "\n")
bkn :: String -> IO ()
bkn str = colorPutStr Green ("Bridgekeeper: " ++ str)
you :: String -> IO ()
you str = colorPutStr Yellow ("You: " ++ str ++ "\n")
intro :: IO ()
intro = do
bk "Stop!"
bk "Who would cross the Bridge of Death"
bk "must answer me these questions three,"
bk "ere the other side he see."
you "Ask me the questions, bridgekeeper, I am not afraid.\n"
end :: IO ()
end = do
putStrLn "\n\n"
bk "What... is the air-speed velocity of an unladen swallow?"
you "What do you mean? An African or European swallow?"
bk "Huh? I... I don't know that."
putStrLn "[the bridgekeeper is thrown over]"
bk "Auuuuuuuuuuuugh"
putStrLn "Sir Bedevere: How do you know so much about swallows?"
you "Well, you have to know these things when you're a king, you know."
We could put this code in src/Main.hs and declare a main function:
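A minimal sketch of that main: it simply chains the intro and end actions defined above, so this fragment is not standalone.

```haskell
main :: IO ()
main = do
    intro
    end
```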
Run cabal install and then cabal run (or ./.cabal-sandbox/bin/holy-project). It works!
In order to ask questions, here is how we do it in shell script:
If we want to abstract things a bit, the easiest way in shell is to use a global variable² which will get the value of the user input, like this:
answer=""
ask(){
local info="$1"
bk "What is your $info?"
print -n "> "
read answer
}
...
ask name
name="$answer"
In Haskell we won’t need any global variable:
import System.IO (hFlush, stdout)
...
ask :: String -> IO String
ask info = do
bk $ "What is your " ++ info ++ "?"
putStr "> "
hFlush stdout -- Because we want to ask on the same line.
getLine
Now our main function might look like:
main = do
intro
_ <- ask "project name"
_ <- ask "name"
_ <- ask "email"
_ <- ask "github account"
_ <- ask "project in less than a dozen word"
end
You could test it with cabal install and then ./.cabal-sandbox/bin/holy-project.
We will see later how to guess the answer using the .gitconfig
file and the github API.
I don't really like the ability to use capital letters in a package name. So in shell I transform the project name like this:
In order to achieve the same result in Haskell (don't forget to add the split package):
import Data.Char (toLower)
import Data.List (intercalate)
import Data.List.Split (splitOneOf)
...
projectNameFromString :: String -> String
projectNameFromString str = intercalate "-" (splitOneOf " -" (map toLower str))
One important thing to note is that in zsh the transformation occurs on strings, but in Haskell we use lists as an intermediate representation:
zsh:
"Holy grail" ==( ${project:gs/ /-/} )=> "Holy-grail"
==( ${project:l} )=> "holy-grail"
haskell:
"Holy grail" ==( map toLower )=> "holy grail"
==( splitOneOf " -" )=> ["holy","grail"]
==( intercalate "-" )=> "holy-grail"
The module name is a capitalized version of the project name where we remove dashes.
# Capitalize a string
capitalize(){
local str="$(print -- "$*" | sed 's/-/ /g')"
print -- ${(C)str} | sed 's/ //g'
}
-- | transform a chain like "Holy project" in "HolyProject"
capitalize :: String -> String
capitalize str = concatMap capitalizeWord (splitOneOf " -" str)
where
capitalizeWord :: String -> String
capitalizeWord (x:xs) = toUpper x:map toLower xs
capitalizeWord _ = []
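A quick self-contained check of both helpers; splitOneOf is re-implemented inline here so the snippet runs without the split package (its behavior matches Data.List.Split for these inputs).

```haskell
import Data.Char (toLower, toUpper)
import Data.List (intercalate)

-- Inline stand-in for Data.List.Split.splitOneOf.
splitOneOf :: String -> String -> [String]
splitOneOf seps = foldr step [[]]
  where
    step c acc@(w:ws)
        | c `elem` seps = [] : acc
        | otherwise     = (c : w) : ws
    step _ [] = [[]]  -- unreachable: the accumulator is never empty

projectNameFromString :: String -> String
projectNameFromString str = intercalate "-" (splitOneOf " -" (map toLower str))

capitalize :: String -> String
capitalize str = concatMap capitalizeWord (splitOneOf " -" str)
  where
    capitalizeWord (x:xs) = toUpper x : map toLower xs
    capitalizeWord _      = []

main :: IO ()
main = do
    putStrLn (projectNameFromString "Holy grail")  -- holy-grail
    putStrLn (capitalize "Holy-grail")             -- HolyGrail
```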
The Haskell version is handmade, whereas zsh already had a capitalize operation for multi-word strings. Here is the difference between the shell and Haskell way (note I split the effect of concatMap into map and concat):
shell:
"Holy-grail" ==( sed 's/-/ /g' )=> "Holy grail"
==( ${(C)str} )=> "Holy Grail"
==( sed 's/ //g' )=> "HolyGrail"
haskell:
"Holy-grail" ==( splitOneOf " -" )=> ["Holy","grail"]
==( map capitalizeWord )=> ["Holy","Grail"]
==( concat )=> "HolyGrail"
As in the preceding example, in shell we work on strings while Haskell uses temporary list representations.
Also I want to be quite restrictive on the kind of project name we can give. This is why I added a check function.
ioassert :: Bool -> String -> IO ()
ioassert True _ = return ()
ioassert False str = error str
main :: IO ()
main = do
intro
project <- ask "project name"
ioassert (checkProjectName project)
"Use only letters, numbers, spaces and dashes please"
let projectname = projectNameFromString project
modulename = capitalize project
checkProjectName verifies that the project name is not empty and uses only letters, numbers, spaces and dashes:
-- | verify if project name is conform
checkProjectName :: String -> Bool
checkProjectName [] = False
checkProjectName str =
all (\c -> isLetter c || isNumber c || c=='-' || c==' ') str
Making a project consists in creating files and directories whose names and contents depend on the answers gathered so far.
In shell, for each file to create, we used something like:
In Haskell, while possible, we shouldn't put the file contents directly in the source code. There is a relatively easy way to include external files in a cabal package, and this is what we will use.
Furthermore, we need a templating system to replace small parts of the static files by computed values. For this task, I chose to use hastache, a Haskell implementation of Mustache templates³.
Cabal provides a way to add files which are not source files to a package. You simply have to add a Data-Files:
entry in the header of the cabal file:
data-files: scaffold/LICENSE
, scaffold/Setup.hs
, scaffold/auto-update
, scaffold/gitignore
, scaffold/interact
, scaffold/project.cabal
, scaffold/src/Main.hs
, scaffold/src/ModuleName.hs
, scaffold/src/ModuleName/Coconut.hs
, scaffold/src/ModuleName/Swallow.hs
, scaffold/test/ModuleName/Coconut/Test.hs
, scaffold/test/ModuleName/Swallow/Test.hs
, scaffold/test/Test.hs
Now we simply have to create our files at the specified path. Here is for example the first lines of the LICENSE file.
The MIT License (MIT)
Copyright (c) {{year}} {{author}}
Permission is hereby granted, free of charge, to any person obtaining a copy
...
It will be up to our program to replace the {{year}} and {{author}} at runtime. But first we have to find the files. Cabal will create a module named Paths_holy_project; if we import this module we have the function getDataFileName at our disposal. Now we can read the files at runtime like this:
...
do
pkgFilePath <- getDataFileName "scaffold/LICENSE"
templateContent <- readFile pkgFilePath
...
A first remark: for portability purposes we shouldn't build file paths by hand with String concatenation. For example, Windows uses \ rather than / as the directory separator. To resolve this problem we will use the FilePath combinators:
import System.Directory
import System.FilePath.Posix (takeDirectory,(</>))
...
createProject ... = do
...
createDirectory projectName -- mkdir
setCurrentDirectory projectName -- cd
genFile "LICENSE" "LICENSE"
genFile "gitignore" ".gitignore"
genFile "src/Main.hs" ("src" </> "Main.hs")
genFile filename outputFileName = do
    pkgfileName <- getDataFileName ("scaffold/" ++ filename)
    template <- readFile pkgfileName
    transformedFile <- ??? -- hastache magic here
    createDirectoryIfMissing True (takeDirectory outputFileName)
    writeFile outputFileName transformedFile
In order to use hastache we can either create a context manually or use generics to create a context from a record. It is the latter option we will show here. First, we need to import some modules and declare a record containing all the information necessary to create our project.
{-# LANGUAGE DeriveDataTypeable #-}
...
import Data.Data
import Text.Hastache
import Text.Hastache.Context
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy.Char8 as LZ
data Project = Project {
projectName :: String
, moduleName :: String
, author :: String
, mail :: String
, ghaccount :: String
, synopsis :: String
, year :: String
} deriving (Data, Typeable)
Once we have declared this, we should populate our Project record with the data provided by the user. So our main function should look like:
main :: IO ()
main = do
intro
project <- ask "project name"
ioassert (checkProjectName project)
"Use only letters, numbers, spaces and dashes please"
let projectname = projectNameFromString project
modulename = capitalize project
in_author <- ask "name"
in_email <- ask "email"
in_ghaccount <- ask "github account"
in_synopsis <- ask "project in less than a dozen word?"
current_year <- getCurrentYear
createProject $ Project projectname modulename in_author in_email
in_ghaccount in_synopsis current_year
end
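The getCurrentYear helper used above is never shown in the article; a plausible sketch using the time package (which ships with GHC) could be:

```haskell
import Data.Time.Clock (getCurrentTime, utctDay)
import Data.Time.Calendar (toGregorian)

-- | Hypothetical implementation: return the current year as a String,
-- ready to be stored in the Project record.
getCurrentYear :: IO String
getCurrentYear = do
    now <- getCurrentTime
    let (year, _month, _day) = toGregorian (utctDay now)
    return (show year)

main :: IO ()
main = getCurrentYear >>= putStrLn
```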
Finally we could use hastache this way:
createProject :: Project -> IO ()
createProject p = do
let context = mkGenericContext p
createDirectory (projectName p)
setCurrentDirectory (projectName p)
genFile context "gitignore" $ ".gitignore"
genFile context "project.cabal" $ (projectName p) ++ ".cabal"
genFile context "src/Main.hs" $ "src" </> "Main.hs"
...
genFile :: MuContext IO -> FilePath -> FilePath -> IO ()
genFile context filename outputFileName = do
pkgfileName <- getDataFileName ("scaffold/"++filename)
template <- BS.readFile pkgfileName
transformedFile <- hastacheStr defaultConfig template context
createDirectoryIfMissing True (takeDirectory outputFileName)
LZ.writeFile outputFileName transformedFile
We use external files in mustache format. We ask questions to our user to fill a data structure, use this data structure to create a context, and hastache uses this context together with the external files to create the project files.
We need to initialize git and cabal. For this we simply call external commands with the system function.
import System.Cmd
...
main = do
...
_ <- system "git init ."
_ <- system "cabal sandbox init"
_ <- system "cabal install"
_ <- system "cabal test"
_ <- system $ "./.cabal-sandbox/bin/test-" ++ projectName
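Each system call returns an ExitCode which the snippet above silently discards. A small improvement sketch (safeSystem is a hypothetical helper; it uses System.Process, since System.Cmd is deprecated in newer GHCs) stops at the first failing command:

```haskell
import System.Exit (ExitCode(..))
import System.Process (system)

-- Hypothetical helper: run a shell command, abort with a message on failure.
safeSystem :: String -> IO ()
safeSystem cmd = do
    code <- system cmd
    case code of
        ExitSuccess   -> return ()
        ExitFailure n -> error (cmd ++ " failed with exit code " ++ show n)

main :: IO ()
main = safeSystem "true"  -- e.g. mapM_ safeSystem ["git init .", "cabal install"]
```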
Our job is almost finished. Now we only need to add some nice features to make the application more enjoyable.
The first one is a better error message.
import System.Random
holyError :: String -> IO ()
holyError str = do
r <- randomIO
if r
then
do
bk "What... is your favourite colour?"
you "Blue. No, yel..."
putStrLn "[You are thrown over the edge into the volcano]"
you "Auuuuuuuuuuuugh"
bk " Hee hee heh."
else
do
bk "What is the capital of Assyria?"
you "I don't know that!"
putStrLn "[You are thrown over the edge into the volcano]"
you "Auuuuuuuuuuuugh"
error ('\n':str)
We also have to update the places where errors are raised, typically making ioassert use holyError instead of error.
.gitconfig
We want to retrieve the ~/.gitconfig file content and see if it contains name and email information. We will need access to the HOME environment variable. Also, as we already use the bytestring package for hastache, let's take advantage of this library.
import Data.Maybe (fromJust)
import System.Environment (getEnv)
import Control.Exception
import System.IO.Error
import Control.Monad (guard)
safeReadGitConfig :: IO LZ.ByteString
safeReadGitConfig = do
e <- tryJust (guard . isDoesNotExistError)
(do
home <- getEnv "HOME"
LZ.readFile $ home ++ "/.gitconfig" )
return $ either (const (LZ.empty)) id e
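The tryJust pattern above generalizes to any "read a file or fall back to a default" need. Here is a self-contained sketch (readFileOr is a made-up name):

```haskell
import Control.Exception (tryJust)
import Control.Monad (guard)
import System.IO.Error (isDoesNotExistError)

-- Read a file, returning a default value when the file does not exist.
-- Any other IO error (e.g. a permission problem) is still raised.
readFileOr :: String -> FilePath -> IO String
readFileOr def path = do
    e <- tryJust (guard . isDoesNotExistError) (readFile path)
    return (either (const def) id e)

main :: IO ()
main = readFileOr "(no gitconfig found)" "/nonexistent/.gitconfig" >>= putStrLn
```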
...
main = do
gitconfig <- safeReadGitConfig
let (name,email) = getNameAndMail gitconfig
project <- ask "project name" Nothing
...
in_author <- ask "name" name
...
Note that I changed the ask function slightly to take a Maybe parameter.
ask :: String -> Maybe String -> IO String
ask info hint = do
bk $ "What is your " ++ info ++ "?" ++ (maybe "" (\h -> " ("++h++")") hint)
...
Concerning the parsing of .gitconfig, it is quite minimalist.
getNameAndMail :: LZ.ByteString -> (Maybe String,Maybe String)
getNameAndMail gitConfigContent = (getFirstValueFor splitted "name",
getFirstValueFor splitted "email")
where
-- make lines of words
splitted :: [[LZ.ByteString]]
splitted = map LZ.words (LZ.lines gitConfigContent)
-- Get the first line which start with
-- 'elem =' and return the third field (value)
getFirstValueFor :: [[LZ.ByteString]] -> String -> Maybe String
getFirstValueFor splitted key = firstJust (map (getValueForKey key) splitted)
-- return the first Just value of a list of Maybe
firstJust :: (Eq a) => [Maybe a] -> Maybe a
firstJust l = case dropWhile (==Nothing) l of
[] -> Nothing
(j:_) -> j
-- Given a line of words ("word1":"word2":rest)
-- getValue will return rest if word1 == key
-- 'elem =' or Nothing otherwise
getValueForKey :: String -- key
-> [LZ.ByteString] -- line of words
-> Maybe String -- the value if found
getValueForKey el (n:e:xs) = if (n == (LZ.pack el)) && (e == (LZ.pack "="))
then Just (LZ.unpack (LZ.unwords xs))
else Nothing
getValueForKey _ _ = Nothing
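To see the parser in action, here is a self-contained miniature of it working on plain Strings (same logic, with firstJust expressed via listToMaybe and mapMaybe):

```haskell
import Data.Maybe (listToMaybe, mapMaybe)

-- Given a line of words, return the value if the line is "key = value".
getValueForKey :: String -> [String] -> Maybe String
getValueForKey key (k:eq:rest)
    | k == key && eq == "=" = Just (unwords rest)
getValueForKey _ _ = Nothing

-- First matching value among all lines.
getFirstValueFor :: [[String]] -> String -> Maybe String
getFirstValueFor splitted key =
    listToMaybe (mapMaybe (getValueForKey key) splitted)

getNameAndMail :: String -> (Maybe String, Maybe String)
getNameAndMail content = (get "name", get "email")
  where
    splitted = map words (lines content)
    get = getFirstValueFor splitted

main :: IO ()
main = print (getNameAndMail "[user]\n  name = Arthur\n  email = arthur@camelot.uk")
-- prints (Just "Arthur",Just "arthur@camelot.uk")
```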
Note that getNameAndMail doesn't read the full file; thanks to laziness it stops at the first occurrence of name and email.
The task seems relatively easy, but we'll see some complexity is hidden. We make a request to https://api.github.com/search/users?q=<email>, parse the JSON, and get the login field of the first item.
The first problem to handle is connecting to a URL. For this we will use the http-conduit package.
Generally, for simple request, we should use:
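The elided one-liner is presumably http-conduit's simpleHttp, which fetches a URL in a single call (this sketch needs network access to actually run):

```haskell
import Network.HTTP.Conduit (simpleHttp)
import qualified Data.ByteString.Lazy.Char8 as LZ

-- simpleHttp fetches a URL and returns the body as a lazy ByteString.
main :: IO ()
main = do
    body <- simpleHttp "https://api.github.com/"
    LZ.putStrLn (LZ.take 80 body)
```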
But, after some research, I discovered we must declare a User-Agent in the HTTP header to be accepted by the github API. So we have to change the HTTP header, and our code becomes slightly more complex:
{-# LANGUAGE OverloadedStrings #-}
...
simpleHTTPWithUserAgent :: String -> IO LZ.ByteString
simpleHTTPWithUserAgent url = do
r <- parseUrl url
let request = r { requestHeaders = [ ("User-Agent","HTTP-Conduit") ] }
withManager $ (return.responseBody) <=< httpLbs request
getGHUser :: String -> IO (Maybe String)
getGHUser "" = return Nothing
getGHUser email = do
let url = "https://api.github.com/search/users?q=" ++ email
body <- simpleHTTPWithUserAgent url
...
So now we have a String containing a JSON representation. In javascript we would have used login=JSON.parse(body).items[0].login. How will Haskell handle it (knowing the J in JSON is for Javascript)?
First we will need to add the lens-aeson
package and use it that way:
import Control.Lens.Operators ((^?))
import Control.Lens.Aeson
import Data.Aeson.Encode (fromValue)
import qualified Data.Text.Lazy as TLZ
import qualified Data.Text.Lazy.Builder as TLB
getGHUser :: String -> IO (Maybe String)
getGHUser email = do
let url = "https://api.github.com/search/users?q=" ++ email
body <- simpleHTTPWithUserAgent url
let login = body ^? key "items" . nth 0 . key "login"
return $ fmap jsonValueToString login
where
jsonValueToString = TLZ.unpack . TLB.toLazyText . fromValue
It looks ugly, but it’s terse. In fact each function (^?)
, key
and nth
has some great mathematical properties and everything is type safe. Unfortunately I had to make my own jsonValueToString
. I hope I simply missed a simpler existing function.
You can read this article on lens-aeson
and prisms to know more.
We now have all the features provided by the original zsh shell script. But here is a good occasion to use one of Haskell's great features.
We will launch the API request sooner and in parallel to minimize our wait time:
import Control.Concurrent
...
main :: IO ()
main = do
intro
gitconfig <- safeReadGitConfig
let (name,email) = getNameAndMail gitconfig
earlyhint <- newEmptyMVar
maybe (putMVar earlyhint Nothing) -- if no email found put Nothing
(\hintmail -> do -- in the other case request the github API
forkIO (putMVar earlyhint =<< getGHUser hintmail)
return ())
email
project <- ask "project name" Nothing
ioassert (checkProjectName project)
"Use only letters, numbers, spaces and dashes please"
let projectname = projectNameFromString project
modulename = capitalize project
in_author <- ask "name" name
in_email <- ask "email" email
ghUserHint <- if maybe "" id email /= in_email
then getGHUser in_email
else takeMVar earlyhint
in_ghaccount <- ask "github account" ghUserHint
in_synopsis <- ask "project in less than a dozen word?" Nothing
current_year <- getCurrentYear
createProject $ Project projectname modulename in_author in_email
in_ghaccount in_synopsis current_year
end
While it might feel a bit confusing, it is in fact quite simple. The trick is the MVar: mainly a variable which either is empty or contains something.

- We start by creating an empty MVar (earlyhint).
- If an email was found in the .gitconfig, we fork a thread which calls the github API and puts its answer in the MVar.
- Later, if the user kept the suggested email, we take the content of the MVar (waiting for it if necessary); otherwise we issue a new request with the email that was entered.

If you have a github account and had set your .gitconfig correctly, you might not even have to wait.
We have a working product, but I don't consider our job finished. The code is about 335 lines. Considering everything it does (asking questions, parsing the .gitconfig, querying the github API concurrently, rendering templates and generating a full project), this is quite few.
For short programs it is not obvious to split them into different modules. But my personal preference is to split it anyway.
First we put all the content of src/Main.hs in src/HolyProject.hs and rename the main function to holyStarter. Our src/Main.hs should then contain:
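The elided src/Main.hs is probably as small as this sketch (assuming HolyProject exports holyStarter):

```haskell
module Main where

import HolyProject (holyStarter)

main :: IO ()
main = holyStarter
```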
Of course you have to remember to rename the module of src/HolyProject.hs. I separated the functions into different submodules:
- HolyProject.GitConfig
  - getNameAndMailFromGitConfig: retrieve name and email from the .gitconfig file
- HolyProject.GithubAPI
  - searchGHUser: retrieve the github user name using the github API
- HolyProject.MontyPython
  - bk: the bridgekeeper speaks
  - you: you speak
  - ask: ask a question and wait for an answer
- HolyProject.StringUtils: String helper functions
  - projectNameFromString
  - capitalize
  - checkProjectName
The HolyProject.hs file mostly contains the code that asks questions, shows errors and copies files using hastache.
One of the benefits of modularizing the code is that our main code is clearer. Some functions are declared in a module but not exported; this helps us hide technical details, for example the modification of the HTTP header needed to use the github API.
We didn't take much advantage of the project structure yet. A first step is to generate some documentation. Before most functions I added a comment starting with -- |; these comments will be used by haddock to create the documentation. First, you need to install haddock manually. Be sure to have haddock in your PATH. You could for example add it like this:
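The elided line is not shown; assuming haddock was installed with cabal install haddock, one plausible addition to your shell configuration is:

```shell
# assumption: cabal installed the haddock binary into ~/.cabal/bin
export PATH=$PATH:~/.cabal/bin
```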
And if you are at the root of your project you’ll get it. And now just launch:
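The command to launch is presumably cabal's haddock target (an assumption consistent with the dist/doc output path mentioned just after):

```shell
cabal haddock
```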
And magically, you’ll have a documentation in dist/doc/html/holy-project/index.html
.
While Haskell's static typing is quite efficient at preventing entire classes of bugs, it doesn't remove the need for tests to minimize the number of bugs.
It is generally said that we should use unit testing for code in IO, and QuickCheck or SmallCheck for pure code.
A unit test example on pure code is in the file test/HolyProject/Swallow/Test.hs:
module HolyProject.Swallow.Test
(swallowSuite)
where
import Test.Tasty (testGroup, TestTree)
import Test.Tasty.HUnit
import HolyProject.Swallow (swallow)
swallowSuite :: TestTree
swallowSuite = testGroup "Swallow"
[testCase "swallow test" testSwallow]
-- in Swallow: swallow = (++)
testSwallow :: Assertion
testSwallow = "something" @=? swallow "some" "thing"
Note swallow is just (++). Tests are organized in groups; each group can contain several test suites. Here we have a test suite with only one test. The (@=?) operator verifies the equality between its two parameters.
So now we could safely delete the directory test/HolyProject/Swallow and the file src/HolyProject/Swallow.hs. And we are ready to make our own real-world unit test. We will first test the module HolyProject.GithubAPI. Let's create a file test/HolyProject/GithubAPI/Test.hs with the following content:
module HolyProject.GithubAPI.Test
( githubAPISuite
) where
import Test.Tasty (testGroup, TestTree)
import Test.Tasty.HUnit
import HolyProject.GithubAPI
githubAPISuite :: TestTree
githubAPISuite = testGroup "GithubAPI"
[ testCase "Yann" $ ioTestEq
(searchGHUser "Yann.Esposito@gmail.com")
(Just "\"yogsototh\"")
, testCase "Jasper" $ ioTestEq
(searchGHUser "Jasper Van der Jeugt")
(Just "\"jaspervdj\"")
]
-- | Test if some IO action returns some expected value
ioTestEq :: (Eq a, Show a) => IO a -> a -> Assertion
ioTestEq action expected = action >>= assertEqual "" expected
You have to modify your cabal file: more precisely, you have to add HolyProject.GithubAPI to the exposed modules of the library section. You also have to update the test/Test.hs file to use GithubAPI instead of Swallow.
So we have our example of unit testing using IO. We search the github nickname of some people I know and verify github continues to give the same answer as expected.
When it comes to pure code, a very good method is to use QuickCheck and SmallCheck. SmallCheck verifies a property for all cases up to some depth, while QuickCheck verifies some random cases.
As this kind of property verification is mostly doable on pure code, we will test the StringUtils module.
So don't forget to declare HolyProject.StringUtils in the exposed modules of the library section of your cabal file, and remove all references to the Coconut module. Modify test/Test.hs to remove all references to Coconut. Then create a test/HolyProject/StringUtils/Test.hs file containing:
module HolyProject.StringUtils.Test
( stringUtilsSuite
) where
import Test.Tasty (testGroup, TestTree)
import Test.Tasty.SmallCheck (forAll)
import qualified Test.Tasty.SmallCheck as SC
import qualified Test.Tasty.QuickCheck as QC
import Test.SmallCheck.Series (Serial)
import HolyProject.StringUtils
stringUtilsSuite :: TestTree
stringUtilsSuite = testGroup "StringUtils"
[ SC.testProperty "SC projectNameFromString idempotent" $
idempotent projectNameFromString
, SC.testProperty "SC capitalize idempotent" $
deeperIdempotent capitalize
, QC.testProperty "QC projectNameFromString idempotent" $
idempotent capitalize
]
idempotent f = \s -> f s == f (f s)
deeperIdempotent :: (Eq a, Show a, Serial m a) => (a -> a) -> SC.Property m
deeperIdempotent f = forAll $ SC.changeDepth1 (+1) $ \s -> f s == f (f s)
The result is here:
All Tests
  StringUtils
    SC projectNameFromString idempotent: OK
      206 tests completed
    SC capitalize idempotent: OK
      1237 tests completed
    QC projectNameFromString idempotent: FAIL
      *** Failed! Falsifiable (after 19 tests and 5 shrinks):
      "a a"
      Use --quickcheck-replay '18 913813783 2147483380' to reproduce.
  GithubAPI
    Yann: OK
    Jasper: OK

1 out of 5 tests failed
The test fails, but this is not an error: our capitalize function isn't expected to be idempotent. I simply added this test to show what happens when a test fails. If you want to look more closely at the error you could do this:
$ ./interact
GHCi, version 7.6.2: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> :l src/HolyProject/StringUtils
[1 of 1] Compiling HolyProject.StringUtils ( src/HolyProject/StringUtils.hs, interpreted )
Ok, modules loaded: HolyProject.StringUtils.
*HolyProject.StringUtils> capitalize "a a"
"AA"
*HolyProject.StringUtils> capitalize (capitalize "a a")
"Aa"
*HolyProject.StringUtils>
It is important to use ./interact instead of ghci, because we need to tell ghci how to find the installed packages.
Apparently, SmallCheck didn't find any counterexample; I don't know exactly how it generates Strings, and using a deeper search takes really long.
Congratulations!
Now you can start programming in Haskell and publish your own cabal package.
For example, you have to install the test libraries manually to use cabal test
.↩
There is no easy way to do something like name=$(ask name), simply because $(ask name) runs in another process which doesn't get access to the standard input.↩
Having a good level of power in templates is very difficult. IMHO Mustache has made the best compromise.↩
tl;dr: How to determine, in the most rational way possible, the best web framework for your needs.
This is it.
You’ve got the next big idea.
You just need to make a very simple web application.
It sounds easy! You just need to choose a good modern web framework, when suddenly:
After your brain stack overflowed, you decide to use a very simple methodology. Answer two questions:
Which language am I familiar with?
What is the most popular web framework for this language?
Great! This is it.
But, you continually hear this little voice.
"You didn't make a bad choice, yes. But…
you didn't make the best one either."
This article tries to determine, in the most objective and rational way possible, the best web framework(s) depending on your needs. To reach this goal, I provide a decision tool in the results section.
I will use the following methodology:
Methodology
☞ Important Note
I am far from happy with the actual result. There are a lot of biases, for example in the choice of the parameters. The same can be said about the data I gathered; I am using very imprecise information. But, as far as I know, this is the only article which uses many different parameters to help you choose a web framework. This is why I made a very flexible decision tool:
Here are the important features (properties/parameters) I selected to make the choice:
Each feature is quite important and mostly independent from the others. I tried to cover the most important topics concerning web frameworks with these four properties. I am fully conscious some people might miss another important feature. Nonetheless the methodology used here can be easily replicated: if you miss an important property, add it at will and use this choice method.
Also each feature is very hard to measure with precision. This is why we will only focus on order of magnitude.
For each property a framework could have one of the six possible values: Excellent, Very Good, Good, Medium, Bad or Very Bad
So how to make a decision model from these informations?
One of the most versatile method is to give a weight for each cluster value. And to select the framework maximizing this score:
score(framework) = efficiency + robustness + expressiveness + popularity
For example:
Property | Excellent | Very Good | Good | Medium | Bad | Very Bad |
---|---|---|---|---|---|---|
Expressiveness | 10 | 7 | 1 | -∞ | -∞ | -∞ |
Popularity | 5 | 5 | 4 | 3 | 2 | 1 |
Efficiency | 10 | 8 | 6 | 4 | 2 | 1 |
Robustness | 10 | 8 | 6 | 4 | 2 | 1 |

Using this weighted table, that means:
- We discard the three least expressive clusters.
- We don't make any difference between excellent and very good in popularity.
- Concerning efficiency, a framework in the excellent cluster will get 2 more points than one in the "very good" cluster.
So for each framework we compute its score relatively to a weighted table. And we select the best(s).
Example: using this hypothetical framework and the preceding table.
Framework | Expressiveness | Popularity | Efficiency | Robustness |
---|---|---|---|---|
yog | Excellent | Very Bad | Medium | Very Good |

score(yog) = 10 + 0 + 4 + 8 = 22
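The whole decision model fits in a few lines of Haskell. This sketch uses the weights from the example table (note the table gives Very Bad popularity a weight of 1 while the worked example counts it as 0, so the sketch yields 23 rather than 22 for yog):

```haskell
data Grade = Excellent | VeryGood | Good | Medium | Bad | VeryBad

-- Weights from the example table; -infinity effectively discards a cluster.
expressiveness, popularity, efficiency, robustness :: Grade -> Double
expressiveness g = case g of
    Excellent -> 10
    VeryGood  -> 7
    Good      -> 1
    _         -> -1 / 0
popularity g = case g of
    Excellent -> 5
    VeryGood  -> 5
    Good      -> 4
    Medium    -> 3
    Bad       -> 2
    VeryBad   -> 1
efficiency g = case g of
    Excellent -> 10
    VeryGood  -> 8
    Good      -> 6
    Medium    -> 4
    Bad       -> 2
    VeryBad   -> 1
robustness = efficiency  -- same weights in the example table

score :: Grade -> Grade -> Grade -> Grade -> Double
score e p eff r = expressiveness e + popularity p + efficiency eff + robustness r

main :: IO ()
main = print (score Excellent VeryBad Medium VeryGood)  -- prints 23.0
```

Adapting the weights to your own needs only means editing the four case expressions.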
Most needs should be expressed by such a weighted table. In the result section, we will discuss this further.
It is now time to try to get these measures.
None of the four properties I have chosen can be measured with perfect precision, but we can get the order of magnitude for each.
I tried to focus on the framework only. But it is often easier to start by studying the language first.
For example, I have data about popularity by language and also different data concerning popularity by framework. Even if I use only the framework-focused data in my final decision model, it seemed important to me to discuss the data for the languages. The goal is to provide a tool to help your decision, not to make the decision for you.
The RedMonk Programming Language Rankings (January 2013) provide an apparently good measure of popularity. While not perfect, the current measure feels mostly right. They create an image using Stack Overflow and GitHub data: the vertical axis corresponds to the number of questions on Stack Overflow, and the horizontal axis to the number of projects on GitHub.
If you look at the image, your eye can see about four clusters. The first cluster corresponds to mainstream languages:
Most developers know at least one of these languages.
The second cluster is quite a bit bigger. It seems to correspond to languages with a solid community behind them.
I won't go into detail, but you can also see third- and fourth-tier popular languages.
So:
Mainstream: JavaScript, Java, PHP, Python, Ruby, C#, C++, C, Objective-C, Perl, Shell
Good: Scala, Haskell, Visual Basic, Assembly, R, Matlab, ASP, ActionScript, Coffeescript, Groovy, Clojure, Lua, Prolog
Medium: Erlang, Go, Delphi, D, Racket, Scheme, ColdFusion, F#, FORTRAN, Arduino, Tcl, Ocaml
Bad: third tier
Very Bad: fourth tier
I don't think I could easily find web frameworks for third- or fourth-tier languages.
For now, I only talked about language popularity. But what about framework popularity? I made a test using the number of questions on Stack Overflow only, then divided by two for each of the six clusters:
Cluster | Language | Framework | #nb | % |
---|---|---|---|---|
Excellent | Ruby | Rails | 176208 | 100% |
Very Good | Python | Django | 57385 | <50% |
Java | Servlet | 54139 | ||
Java | Spring | 31641 | ||
Node.js | node.js | 27243 | ||
PHP | Codeigniter | 21503 | ||
Groovy | Grails | 20222 | ||
Good | Ruby | Sinatra | 8631 | <25% |
Python | Flask | 7062 | ||
PHP | Laravel | 6982 | ||
PHP | Kohana | 5959 | ||
Node.js | Express | 5009 | ||
Medium | PHP | Cake | 4554 | <13% |
C♯ | ServiceStack | 3838 | ||
Scala | Play | 3823 | ||
Java | Wicket | 3819 | ||
Dart | Dart | 3753 | ||
PHP | Slim | 3361 | ||
Python | Tornado | 3321 | ||
Scala | Lift | 2844 | ||
Go | Go | 2689 | ||
Bad | Java | Tapestry | 1197 | <6% |
C♯ | aspnet | 1000 | ||
Haskell | Yesod | 889 | ||
PHP | Silex | 750 | ||
PHP | Lithium | 732 | ||
C♯ | nancy | 705 | ||
Very bad | Java | Grizzly | 622 | <3% |
Erlang | Cowboy | 568 | ||
Perl | Dancer | 496 | ||
PHP | Symphony2 | 491 | ||
Go | Revel | 459 | ||
Clojure | Compojure | 391 | ||
Perl | Mojolicious | 376 | ||
Scala | Scalatra | 349 | ||
Scala | Finagle | 336 | ||
PHP | Phalcon | 299 | ||
js | Ringo | 299 | ||
Java | Gemini | 276 | ||
Haskell | Snap | 263 | ||
Perl | Plack | 257 | ||
Erlang | Elli | 230 | ||
Java | Dropwizard | 188 | ||
PHP | Yaf | 146 | ||
Java | Play1 | 133 | ||
Node.js | Hapi | 131 | ||
Java | Vertx | 60 | ||
Scala | Unfiltered | 42 | ||
C | onion | 18 | ||
Clojure | http-kit | 17 | ||
Perl | Kelp | 16 | ||
PHP | Micromvc | 13 | ||
Lua | Openresty | 8 | ||
C++ | cpoll-cppsp | 5 | ||
Clojure | Luminus | 3 | ||
PHP | Phreeze | 1 |
As we can see, a framework's popularity indicator can be quite different from its language's popularity. For now I haven't found a nice way to merge the results from RedMonk with these, so I'll use these imperfect ones. Hopefully the order of magnitude is mostly correct for most frameworks.
Another objective measure is efficiency. We all know benchmarks are all flawed. But they are the only indicators concerning efficiency we have.
I used the benchmark from benchmarksgame. Mainly, there are five clusters:
1x→2x | C , C++ |
2x→3x | Java 7, Scala, OCamL, Haskell, Go, Common LISP |
3x→10x | C♯, Clojure, Racket, Dart |
10x→30x | Erlang |
30x→ | PHP, Python, Perl, Ruby, JRuby |
Remarks concerning some very slow languages:
This is a first approach: the speed of the language on basic benchmarks. But here we are interested in web programming. Fortunately techempower has made some tests focused on most web frameworks:
These benchmarks don't fit perfectly with our needs; the values are certainly quite imprecise relative to your real usage. The goal is just to get an order of magnitude for each framework. Another problem is the sheer amount of information.
As always, we should remember this information is also imprecise, so I simply made some efficiency classes.
Remark: I separated the clusters using powers of 2 relative to the fastest.
Cluster | Language | Framework | #nb | slowness |
---|---|---|---|---|
Excellent | C++ | cpoll-cppsp | 114,711 | 1× |
Jav | gemini | 105,204 | ||
Lua | openresty | 93,882 | ||
Jav | servlet | 90,580 | ||
C++ | cpoll-pool | 89,167 | ||
Go | go | 76,024 | ||
Sca | finagle | 68,413 | ||
Go | revel | 66,990 | ||
Jav | rest-express | 63,209 | ||
Very Good | Jav | wicket | 48,772 | >2× |
Sca | scalatra | 48,594 | ||
Clj | http-kit | 42,703 | ||
Jav | spring | 36,643 | >3× | |
PHP | php | 36,605 | ||
Jav | tapestry | 35,032 | ||
Clj | compojure | 32,088 | ||
JS | ringo | 31,962 | ||
Jav | dropwizard | 31,514 | ||
Clj | luminus | 30,672 | ||
Good | Sca | play-slick | 29,950 | >4× |
Sca | unfiltered | 29,782 | ||
Erl | elli | 28,862 | ||
Jav | vertx | 28,075 | ||
JS | nodejs | 27,598 | ||
Erl | cowboy | 24,669 | ||
C | onion | 23,649 | ||
Hkl | yesod | 23,304 | ||
JS | express | 22,856 | >5× | |
Sca | play-scala | 22,372 | ||
Jav | grizzly-jersey | 20,550 | ||
Py | tornado | 20,372 | >6× | |
PHP | phalcon | 18,481 | ||
Grv | grails | 18,467 | ||
Prl | plack | 16,647 | >7× | |
PHP | yaf | 14,388 | ||
Medium | JS | hapi | 11,235 | >10× |
Jav | play1 | 9,979 | ||
Hkl | snap | 9,196 | ||
Prl | kelp | 8,250 | ||
Py | flask | 8,167 | ||
Jav | play-java | 7,905 | ||
Jav | play-java-jpa | 7,846 | ||
PHP | micromvc | 7,387 | ||
Prl | dancer | 5,040 | >20× | |
Prl | mojolicious | 4,371 | ||
JS | ringo-conv | 4,249 | ||
Py | django | 4,026 | ||
PHP | codeigniter | 3,809 | >30× | |
Bad | Rby | rails | 3,445 | |
Sca | lift | 3,311 | ||
PHP | slim | 3,112 | ||
PHP | kohana | 2,378 | >40× | |
PHP | silex | 2,364 | ||
Very Bad | PHP | laravel | 1,639 | >60× |
PHP | phreeze | 1,410 | ||
PHP | lithium | 1,410 | ||
PHP | fuel | 1,410 | ||
PHP | cake | 1,287 | >80× | |
PHP | symfony2 | 879 | >100× | |
C# | aspnet-mvc | 871 | ||
Rby | sinatra | 561 | >200× | |
C# | servicestack | 51 | ||
Dar | dart | 0 | ||
C# | nancy | 0 | ||
Prl | web-simple | 0 |
These are manually made clusters, but you get the idea. Certainly, some frameworks could jump between two adjacent clusters; that is something to keep in mind. But as always, the order of magnitude is mostly right.
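As a rough sketch (the `cluster` function and exact thresholds are my own approximation, not from the article), the clustering by slowness relative to the fastest framework can be written as:

```haskell
import Data.List (find)
import Data.Maybe (fromMaybe)

-- Place a framework in a cluster from its requests/second,
-- relative to the fastest framework measured.
cluster :: Double -> Double -> String
cluster fastest reqPerSec =
    fromMaybe "Very Bad" (snd <$> find (\(t, _) -> slowness <= t) levels)
  where
    slowness = fastest / reqPerSec
    levels = [ (2,  "Excellent")
             , (4,  "Very Good")
             , (10, "Good")
             , (30, "Medium")
             , (60, "Bad") ]
```

For example, yesod's 23,304 requests/second against the fastest 114,711 gives a slowness of about 4.9×, which lands in "Good", matching the table.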
Now, how to objectively measure expressiveness?
RedMonk had a very good idea for an objective (though imprecise) measure of each language's expressiveness. Read this article for details.
After filtering languages suitable for web development, we end up with some clusters:
Cluster | Languages |
---|---|
Excellent | Coffeescript, Clojure, Haskell |
Very Good | Racket, Groovy, R, Scala, OCamL, F♯, Erlang, Lisp, Go |
Medium | Perl, Python, Objective-C, Scheme, Tcl, Ruby |
Bad | Lua, Fortran (free-format), PHP, Java, C++, C♯ |
Very Bad | Assembly, C, Javascript, |
Unfortunately there is no information about Dart, so I simply took a very quick look at its syntax. As it looks a lot like JavaScript, and JS scores quite low, I decided to put it close to Java.
Also, an important remark: JavaScript scores very badly here while CoffeeScript (which compiles to JS) scores "excellent". So if you intend to use a JavaScript framework only with CoffeeScript, that should change the score substantially. As I don't believe that is the standard practice, JavaScript-oriented frameworks score very badly regarding expressiveness.
Cluster | Language | Framework |
---|---|---|
Excellent | Clj | luminus |
Clj | http-kit | |
Clj | compojure | |
Hkl | snap | |
Hkl | yesod | |
Very Good | Erl | elli |
Erl | cowboy | |
Go | go | |
Go | revel | |
Grv | grails | |
Sca | lift | |
Sca | finagle | |
Sca | scalatra | |
Sca | play-scala | |
Sca | play-slick | |
Sca | unfiltered | |
Medium | Prl | kelp |
Prl | plack | |
Prl | dancer | |
Prl | web-simple | |
Prl | mojolicious | |
Py | flask | |
Py | django | |
Py | tornado | |
Rby | rails | |
Rby | sinatra | |
Bad | C# | nancy |
C# | aspnet-mvc | |
C# | servicestack | |
C++ | cpoll-pool | |
C++ | cpoll-cppsp | |
Dar | dart | |
Jav | play1 | |
Jav | vertx | |
Jav | gemini | |
Jav | spring | |
Jav | wicket | |
Jav | servlet | |
Jav | tapestry | |
Jav | play-java | |
Jav | dropwizard | |
Jav | rest-express | |
Jav | play-java-jpa | |
Jav | grizzly-jersey | |
Lua | openresty | |
PHP | php | |
PHP | yaf | |
PHP | cake | |
PHP | fuel | |
PHP | slim | |
PHP | silex | |
PHP | kohana | |
PHP | laravel | |
PHP | lithium | |
PHP | phalcon | |
PHP | phreeze | |
PHP | micromvc | |
PHP | symfony2 | |
PHP | codeigniter | |
Very Bad | C | onion |
JS | hapi | |
JS | ringo | |
JS | nodejs | |
JS | express | |
JS | ringo-conv |
I couldn't find any complete study giving the number of bugs relative to each framework/language.
But one thing I've seen from experience is that the more powerful the type system, the safer your application. While a type system doesn't completely remove the need to test your application, a very good type system tends to remove whole classes of bugs.
Typically, not using pointers helps reduce the number of bugs due to bad references. Also, using a garbage collector greatly reduces the probability of accessing unallocated memory.
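As a minimal illustration of that point (my own example, not taken from any study): in Haskell, a lookup that can fail returns a `Maybe`, so the "absent" case is visible in the type and cannot be silently ignored, where a language with raw pointers or null references would fail at runtime.

```haskell
import qualified Data.Map as M

-- The Maybe type makes lookup failure explicit:
-- the caller must decide what to do when the key is absent.
userEmail :: M.Map String String -> String -> String
userEmail db name = case M.lookup name db of
    Just email -> email
    Nothing    -> "no email on file"  -- omitting this case triggers a compiler warning
```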
From my point of view, robustness is mostly identical to safety.
Here are the clusters:
Cluster | Languages |
---|---|
Excellent | Haskell, Scheme, Erlang |
Very Good | Scala, Java, Clojure |
Good | Ruby, Python, Groovy, javascript, PHP |
Medium | C++, C#, Perl, Objective-C, Go, C |
So applying this to frameworks gives the following clusters:
Cluster | Language | Framework |
---|---|---|
Excellent | Erl | elli |
Erl | cowboy | |
Hkl | snap | |
Hkl | yesod | |
Very Good | Clj | luminus |
Clj | http-kit | |
Clj | compojure | |
Jav | play1 | |
Jav | vertx | |
Jav | gemini | |
Jav | spring | |
Jav | wicket | |
Jav | servlet | |
Jav | tapestry | |
Jav | play-java | |
Jav | dropwizard | |
Jav | rest-express | |
Jav | play-java-jpa | |
Jav | grizzly-jersey | |
Sca | lift | |
Sca | finagle | |
Sca | scalatra | |
Sca | play-scala | |
Sca | play-slick | |
Sca | unfiltered | |
Good | Grv | grails |
JS | hapi | |
JS | ringo | |
JS | nodejs | |
JS | express | |
JS | ringo-conv | |
Lua | openresty | |
PHP | php | |
PHP | yaf | |
PHP | cake | |
PHP | fuel | |
PHP | slim | |
PHP | silex | |
PHP | kohana | |
PHP | laravel | |
PHP | lithium | |
PHP | phalcon | |
PHP | phreeze | |
PHP | micromvc | |
PHP | symfony2 | |
PHP | codeigniter | |
Py | flask | |
Py | django | |
Py | tornado | |
Rby | rails | |
Rby | sinatra | |
Medium | C | onion |
C# | nancy | |
C# | aspnet-mvc | |
C# | servicestack | |
C++ | cpoll-pool | |
C++ | cpoll-cppsp | |
Dar | dart | |
Go | go | |
Go | revel | |
Prl | kelp | |
Prl | plack | |
Prl | dancer | |
Prl | web-simple | |
Prl | mojolicious |
For the results, I initialized the table with my own needs.
And I am quite happy it confirms my current choice. I swear I didn't give yesod any bonus points; I tried to be as objective and factual as possible.
Now it is up to you to enter your preferences.
On each line you can change how important a feature is for you, from essential to insignificant. Of course, you can change the matrix at will.
I show only the top 10 frameworks. To give a more understandable measure, I provide the log of the score.
Excellent | Very good | Good | Medium | Bad | Very bad | Importance | |
---|---|---|---|---|---|---|---|
Expressiveness | |||||||
Popularity | |||||||
Efficiency | |||||||
Robustness |
I didn't have the courage to explain why the scoring system is good. Mostly, if you use a product instead of a sum for the score, you can use powers of e for the values in the matrix, and you can then see the matrix as a probability matrix (each line sums to 1), which gives a slightly better intuition of what's going on.
Remember that the values are exponential. Do not double an already big value, for example; the effect would be extreme.
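The matrix on the page is driven by JavaScript; here is a hypothetical Haskell sketch of the idea (the names are mine): cluster values are powers of e, a framework's score is the product over criteria of the value raised to the importance, and the reported number is the log of that product, i.e. an importance-weighted sum of cluster ranks.

```haskell
-- cluster rank on a scale of my own: Very Bad = 1 .. Excellent = 6
clusterValue :: Double -> Double
clusterValue rank = exp rank

-- product over criteria of value^importance
score :: [(Double, Double)] -> Double   -- (cluster rank, importance)
score crits = product [ clusterValue r ** w | (r, w) <- crits ]

-- the displayed measure: log of the score,
-- which reduces to sum [ r * w ]
logScore :: [(Double, Double)] -> Double
logScore = log . score
```

This also shows why doubling an already big importance is extreme: raising a rank-6 criterion's importance from 1 to 2 multiplies the whole score by e^6.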
All of this is based, as much as I could manage, on objective data. The choice method seems both rather rational and classical. It is now up to you to edit the score matrix to suit your needs.
I know that in the current state there are many flaws. But it is a first system to help make a choice rationally.
I encourage you to go further if you are not satisfied by my method.
The source code for the matrix shouldn't be too hard to read; just look at the source of this web page. You can change the positioning of some frameworks if you believe I made a mistake by placing them in the wrong cluster.
I hope this tool will help make your life easier.
tl;dr: How I use Hakyll. Abbreviations, typographic corrections, multi-language support, use of index.html, etc.
This website is made with Hakyll.
Hakyll can be seen as a minimalist CMS. More generally, it is a library that makes it easy to generate files automatically.
From a user's point of view, here is how I write my articles:
A page title
============
A chapter title
---------------
Azure, our beasts are packed with a cry!
I awake dreaming of the black fruit of the anibe in its warty,
truncated cupule.
Saint John Perse.
### Title 3
> This is a blockquote.
>
> This is a second paragraph in the blockquote
>
> ## This is an H2 in a blockquote
git push
. My blog is hosted on GitHub. Without looking too closely, Hakyll's role can be reduced to:
Create (resp. update) an HTML file when I create (resp. modify) a markdown file.
While this seems easy, there are many hidden details:
Hakyll's job is to help you with all of that. Let's start by explaining the basic concepts.
For each file you create, you must provide:
Let's start with the simplest case: static files (images, fonts, etc.). Generally, you have a source directory (here the current directory) and a destination directory _site
.
The Hakyll code is:
-- for each file in the static directory
match "static/*" $ do
    -- keep the same name and directory
    route idRoute
    -- do not modify the content
    compile copyFileCompiler
This program will copy static/foo.jpg
to _site/static/foo.jpg
. That's a bit heavy for a simple cp
. Now, how do we automatically transform a markdown file into the right HTML?
-- for each file with an md extension
match "posts/*.md" $ do
    -- change its extension to html
    route $ setExtension "html"
    -- use the pandoc library to compile the markdown into html
    compile pandocCompiler
If you create a file posts/toto.md
, it will generate a file _site/posts/toto.html
.
If the file posts/foo.md
contains
the file _site/posts/foo.html
will contain
But horror! _site/posts/cthulhu.html
is not a complete HTML page. It has no header, no footer, etc. This is where templates come in. I add a new directive in the "compile" block.
match "posts/*.md" $ do
    route $ setExtension "html"
    compile $ pandocCompiler
        -- apply the template to the current content
        >>= loadAndApplyTemplate "templates/post.html" defaultContext
Now if templates/posts.html
contains:
our ctuhlhu.html
now contains
<html>
<head>
<title>How could I get the title?</title>
</head>
<body>
<h1>Cthulhu</h1>
<p>ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn</p>
</body>
</html>
That was easy. But one problem remains: how can we change the title? Or, for example, add keywords?
The solution is to use Context
s. For that, we have to add metadata to our markdown1.
And slightly modify our template:
Super easy!
The rest of the article is in English. I will gladly translate it if enough people ask me nicely.
Now that we understand the basic functionality, how do we:
That's easy: simply call the executable using unixFilter
. Of course, you'll have to install SASS (gem install sass
). We also use compressCss to save some space.
match "css/*" $ do
route $ setExtension "css"
compile $ getResourceString >>=
withItemBody (unixFilter "sass" ["--trace"]) >>=
return . fmap compressCss
To help reference your website on the web, it is nice to add some keywords as metadata to your HTML page.
To add keywords, we cannot directly use the markdown metadata: when a post has none, there should be no meta tag at all in the HTML.
An easy answer is to create a Context
that will contain the meta tag.
-- metaKeywordContext will return a Context containing a String
-- that can be reached using $metaKeywords$ in the templates
metaKeywordContext :: Context String
metaKeywordContext = field "metaKeywords" $ \item -> do
    -- tags contains the content of the "tags" metadata
    -- of the current item (i.e. of the source file)
    tags <- getMetadataField (itemIdentifier item) "tags"
    -- if tags is empty, return an empty string;
    -- otherwise return
    -- <meta name="keywords" content="$tags$">
    return $ maybe "" showMetaTags tags
  where
    showMetaTags t = "<meta name=\"keywords\" content=\""
                     ++ t ++ "\">\n"
Then we pass this Context
to the loadAndApplyTemplate
function:
match "posts/*.md" $ do
    route $ setExtension "html"
    compile $ pandocCompiler
        -- apply the template to the current content
        >>= loadAndApplyTemplate "templates/post.html"
                (defaultContext <> metaKeywordContext)
☞ Here are the imports I use for this tutorial.
{-# LANGUAGE OverloadedStrings #-}
import Control.Monad (forM, forM_)
import Data.List (sortBy, isInfixOf)
import Data.Monoid ((<>), mconcat)
import Data.Ord (comparing)
import Hakyll
import System.Locale (defaultTimeLocale)
import System.FilePath.Posix (takeBaseName, takeDirectory
                             , (</>), splitFileName)
What I mean is using URLs of the form:
http://domain.name/post/title-of-the-post/
I prefer this to adding files with an .html
extension. We have to change the default Hakyll route behavior by creating another function, niceRoute
.
-- replace a foo/bar.md by foo/bar/index.html
-- this way the url looks like: foo/bar in most browsers
niceRoute :: Routes
niceRoute = customRoute createIndexRoute
  where
    createIndexRoute ident =
        takeDirectory p </> takeBaseName p </> "index.html"
      where p = toFilePath ident
Not too difficult. But! There might be a problem. What if some content links to foo/index.html
instead of a clean foo/
?
Very simple: we just remove /index.html
from all our links.
-- replace url of the form foo/bar/index.html by foo/bar
removeIndexHtml :: Item String -> Compiler (Item String)
removeIndexHtml item = return $ fmap (withUrls removeIndexStr) item
where
removeIndexStr :: String -> String
removeIndexStr url = case splitFileName url of
(dir, "index.html") | isLocal dir -> dir
_ -> url
where isLocal uri = not (isInfixOf "://" uri)
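As a standalone check (the same logic as `removeIndexStr` above, extracted so it can run on its own):

```haskell
import Data.List (isInfixOf)
import System.FilePath.Posix (splitFileName)

-- strip a trailing index.html from local links only
removeIndexStr :: String -> String
removeIndexStr url = case splitFileName url of
    (dir, "index.html") | isLocal dir -> dir
    _                                 -> url
  where isLocal uri = not ("://" `isInfixOf` uri)
```

A local `posts/foo/index.html` becomes `posts/foo/`, while an external `http://example.com/index.html` is left untouched.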
And we apply this filter at the end of our compilation
match "posts/*.md" $ do
    route niceRoute
    compile $ pandocCompiler
        -- apply the template to the current content
        >>= loadAndApplyTemplate "templates/post.html" defaultContext
        >>= removeIndexHtml
Creating an archive page starts to get difficult. There is an example in the default Hakyll example. Unfortunately, it assumes all posts prefix their name with a date, as in 2013-03-20-My-New-Post.md
.
I migrated from an older blog and didn't want to change my URLs. I also prefer not to use any filename convention. Therefore, I put the date in the published
metadata field. Here is the solution:
match "archive.md" $ do
route $ niceRoute
compile $ do
body <- getResourceBody
return $ renderPandoc body
>>= loadAndApplyTemplate "templates/archive.html" archiveCtx
>>= loadAndApplyTemplate "templates/base.html" defaultContext
>>= removeIndexHtml
Where templates/archive.html
contains
And base.html
is a standard template (simpler than post.html
).
archiveCtx
provides a context containing an HTML representation of a list of posts, in the metadata named posts
. It is used in the templates/archive.html
file as $posts$
.
postList
returns an HTML representation of a list of posts, given an item sort function. It applies a minimal template to each post, then concatenates the results. The template is post-item.html
:
Here is how it is done:
postList :: ([Item String] -> Compiler [Item String])
         -> Compiler String
postList sortFilter = do
    -- sorted posts
    posts <- loadAll "post/*" >>= sortFilter
    itemTpl <- loadBody "templates/post-item.html"
    -- we apply the template to every post
    -- and concatenate the results
    -- (list is a String)
    list <- applyTemplateList itemTpl defaultContext posts
    return list
createdFirst
sorts a list of items and wraps it in the Compiler
context. We need to be in the Compiler
context to access metadata.
createdFirst :: [Item String] -> Compiler [Item String]
createdFirst items = do
    -- itemsWithTime is a list of pairs (date, item)
    itemsWithTime <- forM items $ \item -> do
        -- getItemUTC will look for the metadata "published" or "date"
        -- then it will try to parse the date from some standard formats
        utc <- getItemUTC defaultTimeLocale $ itemIdentifier item
        return (utc, item)
    -- we return the items sorted newest first
    return $ map snd $ reverse $ sortBy (comparing fst) itemsWithTime
It wasn't that easy, but it works pretty well.
To create an RSS feed, we have to:
We then render the posts twice: once for the HTML rendering and once for the RSS. Note that the HTML generation saves a snapshot that the RSS generation reuses.
One of the great feature of Hakyll is to be able to save snapshots. Here is how:
match "posts/*.md" $ do
    route $ setExtension "html"
    compile $ pandocCompiler
        -- save a snapshot to be used later in the rss generation
        >>= saveSnapshot "content"
        >>= loadAndApplyTemplate "templates/post.html" defaultContext
Now each post has an associated snapshot named "content". The snapshots are created after applying pandoc but before applying a template. Furthermore, the feed doesn't need a source markdown file, so we create a file out of nothing: instead of using match
, we use create
:
create ["feed.xml"] $ do
route idRoute
compile $ do
-- load all "content" snapshots of all posts
loadAllSnapshots "posts/*" "content"
-- take the latest 10
>>= (fmap (take 10)) . createdFirst
-- render an Atom feed using some configuration
>>= renderAtom feedConfiguration feedCtx
where
feedCtx :: Context String
feedCtx = defaultContext <>
-- $description$ will render as the post body
bodyField "description"
The feedConfiguration
contains some general information about the feed.
feedConfiguration :: FeedConfiguration
feedConfiguration = FeedConfiguration
{ feedTitle = "Great Old Ones"
, feedDescription = "This feed provide information about Great Old Ones"
, feedAuthorName = "Abdul Alhazred"
, feedAuthorEmail = "abdul.alhazred@great-old-ones.com"
, feedRoot = "http://great-old-ones.com"
}
A great idea certainly stolen from nanoc (my previous blog engine)!
As I just said, nanoc was my previous blog engine. It is written in Ruby and, like Hakyll, it is quite awesome. One thing Ruby does more naturally than Haskell is regular expressions. I had a lot of filters in nanoc; I dropped some because I didn't use them much, but I wanted to keep a few. Generally, filtering the content just means applying a function of type String -> String
to the body.
We also generally want prefilters (to filter the markdown) and postfilters (to filter the HTML after the pandoc compilation).
Here is how I do it:
markdownPostBehavior = do
route $ niceRoute
compile $ do
body <- getResourceBody
prefilteredText <- return $ (fmap preFilters body)
return $ renderPandoc prefilteredText
>>= applyFilter postFilters
>>= saveSnapshot "content"
>>= loadAndApplyTemplate "templates/post.html" yContext
>>= loadAndApplyTemplate "templates/boilerplate.html" yContext
>>= relativizeUrls
>>= removeIndexHtml
Where
applyFilter strfilter str = return $ (fmap $ strfilter) str
preFilters :: String -> String
postFilters :: String -> String
Now we have a simple way to filter the content. Let's extend markdown's abilities.
Compared to LaTeX, a very annoying markdown limitation is the lack of abbreviations.
Fortunately, we can filter our content. Here is the filter I use:
-- requires: import Data.Map (Map); import qualified Data.Map as M
abbreviationFilter :: String -> String
abbreviationFilter = replaceAll "%[a-zA-Z0-9_]*" newnaming
  where
    newnaming matched = case M.lookup (tail matched) abbreviations of
        Nothing -> matched
        Just v  -> v
abbreviations :: Map String String
abbreviations = M.fromList
[ ("html", "<span class=\"sc\">html</span>")
, ("css", "<span class=\"sc\">css</span>")
, ("svg", "<span class=\"sc\">svg</span>")
, ("xml", "<span class=\"sc\">xml</span>")
, ("xslt", "<span class=\"sc\">xslt</span>") ]
It searches for every string starting with '%' and looks in the Map
for a corresponding abbreviation. If there is one, we replace the content; otherwise we do nothing.
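For a quick standalone check, here is a sketch of the same idea without Hakyll's `replaceAll` (the `expand` name and the reduced abbreviation map are mine):

```haskell
import Data.Char (isAlphaNum)
import qualified Data.Map as M

abbrs :: M.Map String String
abbrs = M.fromList [ ("html", "<span class=\"sc\">html</span>") ]

-- walk the string; on '%', read the following word and
-- substitute it if it is a known abbreviation
expand :: String -> String
expand [] = []
expand ('%':rest) =
    let (word, rest') = span (\c -> isAlphaNum c || c == '_') rest
    in case M.lookup word abbrs of
         Just v  -> v ++ expand rest'
         Nothing -> '%' : word ++ expand rest'
expand (c:cs) = c : expand cs
```

Unknown '%'-prefixed words (and lone '%' signs) pass through unchanged.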
Do you really believe I type
each time I write LaTeX?
Generally I write my posts in English and French, and this is harder than it seems. For example, I need to filter by language in order to get the right list of posts. I also use some words in the templates and I want them translated.
First, I create a Map containing all translations.
data Trad = Trad { frTrad :: String, enTrad :: String }
trads :: Map String Trad
trads = M.fromList $ map toTrad [
("changeLanguage",
("English"
, "Français"))
,("switchCss",
("Changer de theme"
,"Change Theme"))
,("socialPrivacy",
("Ces liens sociaux préservent votre vie privée"
,"These social sharing links preserve your privacy"))
]
where
toTrad (key,(french,english)) =
(key, Trad { frTrad = french , enTrad = english })
Then I create a context for each key:
tradsContext :: Context a
tradsContext = mconcat (map addTrad (M.keys trads))
  where
    addTrad :: String -> Context a
    addTrad name =
        field name $ \item -> do
            -- itemLang (defined elsewhere) returns the item's
            -- language, e.g. "fr" or "en"
            lang <- itemLang item
            case M.lookup name trads of
                Just trad -> case lang of
                    "fr" -> return (frTrad trad)
                    _    -> return (enTrad trad)
                Nothing -> return ("NO TRANSLATION FOR " ++ name)
The full code is here. Except for the main file, I use literate Haskell; this way the code should be easier to understand.
If you want to know why I switched from nanoc:
My previous nanoc website was a bit too messy. So much, in fact, that the dependency system recompiled the entire website on any change.
So I had to do something about it. I had two choices:
I had added too many functionalities to my nanoc system. Starting (almost) from scratch efficiently removed a lot of unused crap.
So far I am very happy with the switch. A complete build is about 4x faster, and I didn't break the dependency system this time. As soon as I modify and save the markdown source, I can reload the page in the browser.
I removed a lot of features, though. Some of them would be difficult to achieve with Hakyll. A typical example:
In nanoc I could take a file like this as source:
And it would create a file foo.hs
that could then be downloaded.
<h1>Title</h1>
<p>content</p>
<a href="code/foo.hs">Download foo.hs</a>
<pre><code>main = putStrLn "Cthulhu!"</code></pre>
We can also put this metadata in an external file (toto.md.metadata
).↩
tl;dr: Social network buttons track your users, clash with your site's design, consume resources, and slow down the rendering of your pages.
Do things right. Use static links.
If you don't feel like reading, just copy and paste the following code into your HTML:
<div id="sociallinks">
<a href="https://twitter.com/home?status=$url$"
target="_blank" rel="noopener noreferrer nofollow"
>Tweet this</a> -
<a href="http://www.facebook.com/sharer/sharer.php?u=$url$"
target="_blank" rel="noopener noreferrer nofollow"
>Like this</a> -
<a href="https://plus.google.com/share?url=$url$"
target="_blank" rel="noopener noreferrer nofollow"
>Share on G+</a>
</div>
<script>
(function(){window.addEventListener("DOMContentLoaded",function(){
var url=document.location;
var links=document.getElementById("sociallinks")
.getElementsByTagName('a');
for (var i=0;i!=links.length;i++){
links[i].setAttribute("href",
links[i].href.replace('$url$',url));}})})();
</script>
Ever been on a website and wanted to tweet about it? Fortunately, the website might have a button to help you. But do you really know what that button does?
The "Like", "Tweet" and "+1" buttons load some JavaScript that gets access to your cookies. It helps the provider of the button know who you are.
In plain English, the "+1" button informs Google that you are visiting the website, even if you never click "+1". The same is true of the "Like" button for Facebook and the "Tweet this" button for Twitter.
The problem is not only privacy; in fact (sadly, imho) that isn't an issue for most people. These buttons consume computing resources, far more than a simple link. They slow the computer down a bit and consume energy, and they can also slow down the rendering of your web page.
Another aspect is their design: their look and feel is mostly imposed by the provider.
The most problematic aspect, in my opinion, is using third-party JS on your website. What if tomorrow Twitter updates their tweet button? If the upgrade breaks something for only a minority of people, they won't fix it. This could happen anytime, without any notification. They just have to add a document.write
in a js
file you call asynchronously and BAM! Your website is an empty blank page. And since you call many external resources, it can be very difficult to find the origin of the problem.
Using social network buttons:
I will provide two solutions with the following properties:
Solution 1 (no js):
<a href="https://twitter.com/home?status=$url$"
target="_blank" rel="noopener noreferrer nofollow"
>Tweet this</a>
<a href="http://www.facebook.com/sharer/sharer.php?u=$url$"
target="_blank" rel="noopener noreferrer nofollow"
>Like this</a>
<a href="https://plus.google.com/share?url=$url$"
target="_blank" rel="noopener noreferrer nofollow"
>Share on G+</a>
But you have to replace $url$
with the current URL yourself.
Solution 2 (Just copy/paste):
If you don’t want to write the url yourself, you could use some minimal js:
<div id="sociallinks">
<a href="https://twitter.com/home?status=$url$"
target="_blank" rel="noopener noreferrer nofollow"
>Tweet this</a> -
<a href="http://www.facebook.com/sharer/sharer.php?u=$url$"
target="_blank" rel="noopener noreferrer nofollow"
>Like this</a> -
<a href="https://plus.google.com/share?url=$url$"
target="_blank" rel="noopener noreferrer nofollow"
>Share on G+</a>
</div>
<script>
(function(){window.addEventListener("DOMContentLoaded",function(){
var url=document.location;
var links=document.getElementById("sociallinks")
.getElementsByTagName('a');
for (var i=0;i!=links.length;i++){
links[i].setAttribute("href",
links[i].href.replace('$url$',url));}})})();
</script>
Here is the result:
If you don't want just text but nice icons, you have many choices:
<img src="..."/>
tags in the links.
As the first solution is pretty straightforward, I'll explain the second one.
@font-face
    font-family: 'social'
    src: url('fonts/social_font.ttf') format('truetype')
    font-weight: normal
    font-style: normal

.social
    font-family: social
Now add this to your html:
Solution 1 (without js):
<a href="https://twitter.com/home?status=$url$"
target="_blank" rel="noopener noreferrer nofollow"
class="social">t</a>
·
<a href="http://www.facebook.com/sharer/sharer.php?u=$url$"
target="_blank" rel="noopener noreferrer nofollow"
class="social">`</a>
·
<a href="https://plus.google.com/share?url=$url$"
target="_blank" rel="noopener noreferrer nofollow"
class="social">g</a>
Solution 2 (same with a bit more js):
<div id="sociallinksunicode">
<a href="https://twitter.com/home?status=$url$"
target="_blank" rel="noopener noreferrer nofollow"
class="social">t</a>
·
<a href="http://www.facebook.com/sharer/sharer.php?u=$url$"
target="_blank" rel="noopener noreferrer nofollow"
class="social">`</a>
·
<a href="https://plus.google.com/share?url=$url$"
target="_blank" rel="noopener noreferrer nofollow"
class="social">g</a>
</div>
<script>
(function(){window.addEventListener("DOMContentLoaded",function(){
var url=document.location;
var links=document.getElementById("sociallinksunicode")
.getElementsByTagName('a');
for (var i=0;i!=links.length;i++){
links[i].setAttribute("href",
links[i].href.replace('$url$',url));}})})();
</script>
Here is the result:
PS: on my personal website I still use Google Analytics. Therefore Google (and only Google, not Facebook nor Twitter) can track you here. But I might change this in the future.
Yesterday I was happy to give a presentation about Category Theory at the Riviera Scala Clojure Meetup (note that I used only Haskell for my examples).
If you don't want to read the slides through an HTML presentation framework or download a big PDF, just continue reading this as a standard web page.