Okay. It's the 21st century already. You have colored command lines everywhere, and *nix OSes are basically single-user now. It's time to shift from the default
command-line prompt string to something more useful (and fancy, of course).
This is how my prompt looks now:
This is a four-line prompt.
- Empty line as a separator.
- Clean cut with command history number and 72 dashes.
- User and host names, current time, jobs count and current directory.
- Emoticon showing the result of the last command, and the traditional prompt symbol (which bears no meaning here, really). If we are currently in a Git repo, the current branch name is shown between these two tokens in square brackets.
The emoticon on the last line behaves like this:
Big thanks to Make Tech Easier for inspiration.
Here's how it was done...
I put the code for constructing the
PS1 variable into a separate script.
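A minimal sketch of such a .bash_prompt (the helper names and exact formatting here are illustrative, not the actual script):

```bash
# Capture the exit status of the last command before anything else overwrites it.
PROMPT_COMMAND='LAST_STATUS=$?'

# Emoticon showing whether the last command succeeded.
__last_status_face() {
    if [ "$LAST_STATUS" -eq 0 ]; then echo ':)'; else echo ':('; fi
}

# Current git branch in square brackets, or nothing outside a repo.
__git_branch() {
    local b
    b=$(git symbolic-ref --short HEAD 2>/dev/null) && echo " [$b]"
}

# \! history number, \u@\h user@host, \t time, \j jobs count, \w current directory.
PS1='\n\! $(printf -- "-%.0s" {1..72})\n\u@\h \t jobs:\j \w\n$(__last_status_face)$(__git_branch) \$ '
```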
To use this script, you should source
.bash_prompt from your
.bashrc file, and of course the first line of
.bash_prompt should correctly source
It should be noted that true Common Lisp is somewhat lacking in several important
parts of string processing, and it shows sometimes. Today I needed to heavily
process a large body of regular text, so I will write down here some functions which
are, AFAIK, considered "standard" in modern languages but which are not so easily
accessible and/or amazingly intuitive in CL.
In all the following code snippets the token
input stands for the input string.
Trimming a string of spaces, tabs and newlines
All named characters are listed in Hyperspec, 13.1.7 Character Names.
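A sketch of such a call (input is the input string, as stated above):

```lisp
(string-trim '(#\Space #\Tab #\Newline) input)
```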
Replacing by regular expressions
Provided by the CL-PPCRE package.
In the next snippet I remove all tokens enclosed in square brackets from the input string:
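For example, something along these lines:

```lisp
(cl-ppcre:regex-replace-all "\\[[^\\]]*\\]" input "")
```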
Honestly, I don't know when you would need the simple
regex-replace over regex-replace-all. Also, note the double-escaping of special symbols (
Splitting string by separator symbol
Provided by the CL-UTILITIES package.
In the next snippet I split the input string by commas:
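A sketch of such a call:

```lisp
(cl-utilities:split-sequence #\, input)
```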
Making the same modification on every string in a given list
In the next snippet I trim spaces around all the strings in a list:
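A sketch, assuming list-of-strings holds the list:

```lisp
(mapcar (lambda (s) (string-trim '(#\Space) s)) list-of-strings)
```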
However, it's way better to wrap the transformation of a single string in a separate function and call the mapping referencing just the name of the transformation:
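For example (trim-spaces is a made-up name):

```lisp
(defun trim-spaces (s)
  (string-trim '(#\Space) s))

(mapcar #'trim-spaces list-of-strings)
```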
Do not forget that a string is just a sequence of characters, so all sequence-operating functions can work on it just as they work on a list like
'(#\a #\b #\c #\d). This applies only to sequence-operating functions, however.
Removing characters from a string by condition
In the next snippet I leave only the alphanumeric characters in the input string:
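A sketch of such a call:

```lisp
(remove-if-not #'alphanumericp input)
```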
As with map, you can make arbitrarily complex predicates, either with lambdas or by wrapping them in separate functions.
Here's the canonical method of lazily loading an object attribute.
The point of lazy loading is this: some attribute of an object is another object, and its construction is costly. As a result, we don't want to create this other object at the time of creating the parent object, only at the moment when we really need it.
It's not Haskell, of course, but this idiom is quite simple and, more importantly, you can add lazy loading to any attribute, even an already existing one.
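A sketch of the idiom in PHP (class and property names are made up):

```php
<?php
class Renderer
{
    public function __construct()
    {
        // ...imagine costly setup here...
    }
}

class Report
{
    /** @var Renderer|null Created only on first access. */
    private $renderer;

    public function getRenderer()
    {
        if ($this->renderer === null) {
            // The costly construction happens only when somebody actually asks for it.
            $this->renderer = new Renderer();
        }
        return $this->renderer;
    }
}
```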
Just some loosely related thoughts about producing software.
Regardless of whether you're Agile or doing some standard Waterfall practice, you inevitably do the following things:
- Determine what the application should do
- Write code
- Deploy the program
- Check that the program does what it should do.
These steps are absolutely necessary.
Even if you think you can skip the fourth step, the end user of your application will do it anyway, because to check that the program does what it should do, you obviously have to make it do that.
What we're working with when developing a software application is the codebase.
This codebase is expected to be run by some runtime.
Runtime is a general concept here: it can be software, as in the case of the Java VM, or hardware, as in the case of compiled C++ code.
When the runtime executes the codebase, it can access other programs ("applications") or OS facilities like the file system, network sockets, or I/O channels for human/machine interaction.
Together, all this 3rd-party stuff is the environment inside which the codebase is executed, the runtime being part of the environment.
A bare codebase cannot be executed in just any environment. It either:
- is written for some specific environment and is completely useless in others, or
- has parts by changing which we can accommodate it to different environments.
These parts we call the config.
The act of changing the config we call configuration.
If the config is not explicitly defined as constants or some sort of structured file holding such constants, it doesn't really matter: the seams for the changes required to migrate between environments are still there, albeit hidden.
We usually have the following types of configs:
- Connection strings for reaching some services
- Credentials for getting access to some services
- Properties needed to accommodate to some restrictions or specifics of the environment.
The final software product is a codebase, after configuration, being executed by a runtime inside an environment.
Database contents are environment, not code.
Migrations are config.
Modes of execution
Only two modes of execution of a product are really meaningful:
- Production mode
- Debug mode
We prefer to use the product in production mode.
We also prefer to test the product in production mode.
We prefer to get diagnostics in debug mode.
We also prefer to actually develop the product in debug mode.
It perfectly aligns with, for example, the Debian tradition of packaging software into "just" packages and
-dbg packages, which contain binaries with debug symbols not stripped out.
There are also
-dev packages and
-src packages, but that's not important, as they are on different levels of abstraction from our current standpoint.
No reason to invent other modes of execution.
Production mode is characterized by optimisation for speed of execution and by security of both the user data and the product's consistency.
Debug mode is characterized by the strongest possible bias toward transparency in all actions done by the application. All logging, assertion and error messaging facilities available should be open. If it's applicable, the application should even unpack its own source code in this mode.
It's quite easy to see that the target of all application-level tests is the product, not the codebase (and it's dumb to think about testing the config).
As a side note, we usually do not test the environment specifically, as the whole purpose of the environment is that we rely on it.
What do the lowest-level unit tests do?
Unit tests take small slices of the codebase and treat them as microscopic product inside the minimal environment consisting only of runtime.
This property of unit tests does two things:
- Greatly reduces the complexity of testing, as we almost never need to configure the codebase under test.
- Greatly reduces the usefulness of the test in question, because we check not the feature set of the product but the feature set of its small slice.
The goal of testing was already mentioned before: to make sure that the product really provides value for its user.
Value for the user is determined by the feature set of the product, whose written listing we call the specification of the product.
Given the specification, one can check whether the product really provides the features for user.
This process is called testing.
The agent performing the testing can be either a human, in which case we're talking about manual testing, or some other program, in which case we're talking about automated testing.
When we're doing the manual testing, specification can be written in any language.
When we're doing the automated testing, specification should be written in a language parsable by the automated tester.
Even if we're doing the automated testing, we usually still need the textual description of the feature set of the product.
Thus, it's rational to have the specification readable by both human and the automated tester.
Here we infer the natural necessity of languages like Gherkin or Concordion.
A codebase mainly consists of two parts:
- One which contacts the environment
- One which does not contact the environment
The parts of the codebase which do not contact the environment can be called pure.
By definition, you can cover with unit tests only the pure part of the codebase.
Some languages, like Haskell, force you to explicitly split your codebase into pure and environment-dependent parts.
Unit tests treat code as a product.
They should be written as a set of examples of how to use the code they are testing.
This encourages usage of domain-specific languages to reduce duplication in tests setup and teardown.
This, in turn, encourages usage of domain-specific languages in the production code, to maintain the same level of abstraction in unit tests and the general readability.
Here we infer the natural necessity of DSLs.
A deploy is the act of transferring the codebase from whatever storage it is in to the target machine, configuring it for the environment of that machine and thus making the product available there.
This term is independent of whether we are talking about compiled languages or not.
More than that, with compiled languages the compilation step is neither a deploy nor a configuration.
Compilation of a codebase is just transforming it to the form understandable by the natural runtime of the machine code: the microprocessor itself, with the OS supporting it.
We can safely skip the compilation if we can afford running the product by the runtime of the interpreter, be it JIT-compiling or line-by-line.
For compiled languages, the natural notion of deploy is the "installation" of the software into OS. The act of configuration of compiled codebase according to new environment is performed by the installer program.
For scripting languages in the Web application development domain, there's usually no "installation" step. We just copy the codebase verbatim to the target machine, manually change several lines in the script files dedicated to holding the config, and consider the accommodation to the new environment done.
The well-known problem called "it was working on my machine" arises from ignoring the facts that:
- There are environments other than your workstation
- Your codebase depends on the **environment**
The latter is a lot more significant than the former.
The config is just a set of parts of your codebase, and so it's just plain text.
Therefore, it is suggested that the following installation script will suffice for any codebase out there, no matter what programming language it is written in.
- Take the codebase and the config as input.
- The config is a listing of commands.
- A command either tells the installer to change some text token in some file from the codebase,
- or tells the installer to rename/copy/move some file from the codebase to some other place (a sketch of such a config is shown below).
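A sketch of what such a config could look like (the file names and tokens are made up for illustration):

```
# staging.config — commands for a hypothetical "staging" environment
replace  config/db.php           {{DB_HOST}}      staging-db.internal
replace  config/db.php           {{DB_PASSWORD}}  s3cr3t
copy     config/app.example.php  config/app.php
```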
Such a script does not even deserve to be called a "build system".
Each config file, holding commands, corresponds to one of the environments to which the product can be installed. The codebase itself will hold only placeholders and possibly "example" files holding bunches of placeholders which need to be moved to appropriate places.
Only the codebase should be pushed to the source control system.
Config files should be distributed by either more secure (in case of sensitive credentials) or less secure (in case of local development workstations) means.
It is suggested that, conceptually, there's no real need for build commands other than the following:
Suppose we want to write the following test scenario:
Let's define these steps in our
FeatureContext. The first step we can define with
the following regexp:
/^I am in the Friends section$/, because we really don't
need a method of the
FeatureContext class containing a long switch enumerating
every possible section of the site.
The second step we can define with the following regexp:
/^I(?: should)? see "([^"]*)" in the search results$/
It should be obvious why we use a custom test step instead of using the predefined test steps and writing something like 'I should see "My Friend" in ".search-wrapper form input[role="search"]" element'.
Then, someday, sure thing, we will want to write the following scenario:
And here we have another “search results”, which should be found by a completely
different selector and which is located on a different page.
So, this is a context-dependent statement: the term “search results” depends
on what “section” we mentioned previously. This comes right from linguistics.
To be able to use this natural-language feature we need to implement it somehow.
I'll use the abbrev CDTS instead of longer "context-dependent test step".
Fortunately, Behat has a feature with exactly the same purpose: subcontexts.
Unfortunately, it does not work in the way we need in order to use CDTS properly.
In an ideal world, we could do this:
and this would load the
FriendsSectionContext and all the CDTS definitions in
it, like the following:
By passing a different context class to
useContext, we would get a different definition for the
/^I should see "([^"]*)" in the search results$/ test step.
Unfortunately, Behat cannot load test step definitions from subcontexts
at runtime. Apparently, it's because it has to parse the regexps in the docblocks
corresponding to the definitions, or something like that. So, we are forced to load
all our subcontexts right in our constructor.
Apart from being horribly ineffective, this prevents us from defining test
steps having the same regexp across several separate subcontexts.
The workaround for this problem is this:
- add a property to the
FeatureContext which will hold the reference to the current subcontext; name it "location_context" or so,
- make the context-setting test step
('I am in the "..." section') set the "location_context" to the subcontext needed (you can get the subcontext with a call to getSubcontext),
- move the context-dependent logic to “normal” subcontext methods, which should have the same name across all subcontexts,
- register all subcontexts with
useContext under meaningful aliases like "friends_section", "shop_section", etc.,
- define the context-dependent test step like 'I should see "..." in search results' in the main FeatureContext,
- in the definition of this step, get the context-dependent logic needed by calling the relevant method on the subcontext which the "location_context" property currently points at.
So, we need our context-setting test steps to be like this:
Assuming 'friends_section' is an alias of the
FriendsSectionContext and it was set in the constructor, after this test step our "location_context"
will point to the FriendsSectionContext, and, say, its search-results method
will do exactly what we need in the "Friends" section.
Then, the context-dependent test step will look like this, getting the location-dependent
logic from the "location_context" set previously:
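A sketch of how this fits together in the main FeatureContext (ShopSectionContext and the assertSeenInSearchResults method are made-up names; FriendsSectionContext is the subcontext from above):

```php
<?php
use Behat\Behat\Context\BehatContext;

class FeatureContext extends BehatContext
{
    /** @var BehatContext Subcontext holding the logic for the current "section". */
    private $location_context;

    public function __construct()
    {
        // Register all subcontexts under meaningful aliases.
        $this->useContext('friends_section', new FriendsSectionContext());
        $this->useContext('shop_section', new ShopSectionContext());
    }

    /**
     * @Given /^I am in the Friends section$/
     */
    public function iAmInTheFriendsSection()
    {
        // ...navigate to the section here, then remember which subcontext rules it:
        $this->location_context = $this->getSubcontext('friends_section');
    }

    /**
     * @Then /^I(?: should)? see "([^"]*)" in the search results$/
     */
    public function iShouldSeeInTheSearchResults($text)
    {
        // Delegate to whatever subcontext the previous context-setting step chose.
        $this->location_context->assertSeenInSearchResults($text);
    }
}
```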
The main point is this: when we want to check that something should appear in the "search
results" entity on some different page, we can use the same test step in our
.feature files; we just explicitly name the needed section somewhere
above in the text. This will make the
.feature files a lot more human-readable.
This concludes the explanation of how to use this linguistic technique in Behat.
Just watched episodes 6, 7 and 8 of the Clean Coders codecasts, namely "TDD part 2", "Architecture" and "SOLID Principles", and just could not stay silent about the awesomeness of these works.
First of all, "TDD part 2" has almost half an hour of peeping over the shoulder of Uncle Bob while he does the Bowling Kata (yeah, I know that the Bowling Kata is boring to death these days, because it's just everywhere, but nevertheless) and this is just awesome: you can watch not only proper TDD in action, but also the workflow with a modern auto-refactoring IDE (IntelliJ IDEA was used there). I don't know which is more inspiring to a hardcore Viperized EMACS user like me.
Second, "Architecture" has the greatest and cleanest explanation of the clean functional separation between different parts of a modern-style application built with OO principles in mind. I even redrew it on paper to learn it by heart, but, I suppose, I cannot reproduce it here because of legal issues (judging by the licences for the codecasts, Uncle Bob is pretty strict about legal issues).
And third, "SOLID Principles" expands on "Architecture" even more and adds **a lot** of detail to the methods of designing programs cleanly. If you (like me) have never worked in serious IT consulting before (say, you're a fresh graduate from some university), you'll be able to get many, many good pieces of advice for building/maintaining your next application.
I really think these three episodes are the core of the whole series, and if you cannot afford all 14 episodes, then go buy "Clean Code" and then only these three episodes. You just don't understand what you're missing, honestly.
As for me, I'm ashamed of my early projects now and want to rewrite them all from scratch badly. :(
Things in the source code I hate perhaps the most in my everyday work
There are things one can tolerate. And there are things which just choke you down and turn your brain inside out, and you suddenly jump up and start screaming and throwing things at other things.
Here's my list.
Holy crap, the MOST irritating is to see the following:
That's making me facepalm every time I see it.
I mean, what the fuck?
You are DEVELOPER, muthafucka!
You MUST see the errors while you are working, how the hell you are supposed to debug anything without error reporting?!
The only place where you should put such embarrassing shit like the above is the bootstrap script on your PRODUCTION, just ONE of your deployment environments!
Here's how you're supposed to do it, you moron!
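A sketch of the idea (APP_ENV is a made-up environment marker; adapt it to however you detect production):

```php
<?php
// See everything, always.
error_reporting(E_ALL | E_STRICT);

if (getenv('APP_ENV') === 'production') {
    ini_set('display_errors', '0'); // users never see raw errors...
    ini_set('log_errors', '1');     // ...but everything still lands in the log
} else {
    ini_set('display_errors', '1'); // developers see every problem immediately
}
```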
Even better with feature-detecting instead of environment-detecting and proper code style:
ALL OTHER TIME IT'S MEANINGLESS TO DISABLE ERROR REPORTING, REMEMBER IT, FUCKER!!
I see even the freaking web frameworks doing this idiotic shit and inventing some crazy nonsense like their own debugging facilities with their own flag constants.
JUST WRITE EVERYTHING YOU CAN TO ME IN CASE OF ERROR, YOU WORTHLESS PIECE OF CRAP.
Second is, of course, the following shit.
BLAM! PHP Notice about access to undefined hashmap key.
YES SONUVABITCH I WORK WITH E_STRICT ENABLED because I do care about any possible sign that I screwed up somewhere.
Here's the most usual usage of this shit:
A PHP Notice in that view file means the user got the whole page crashed because of a single nonexistent data item, and all we wanted to do with that data item was render it; it wasn't something mission-critical.
If you want to output possibly nonexistent data, just prepend a fucking
@, that's all!
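For example (the array and key are made up):

```php
<?php
// Renders an empty string instead of spamming a notice when the key is absent.
echo @$data['maybe_missing'];
```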
Unnecessary strict comparison
The $some_fetch_result === null crap.
Yeah, that's right! Why the fuck do you want to know whether the result of fetching something from somewhere is exactly
null? What if whoever wrote that API method decided to return
false in case of an empty result? Even if there's an empty array (in which case even I agree that it should not be considered the same as null), what are you going to do with it anyway?
Just forget about this brain-hurting nonsense:
Write the fucking
!$records and that's all! By doing that
=== crap you stress that
null is a significant value with a special meaning, when it obviously isn't; you just wanted to check whether anything was returned at all.
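In other words, roughly (the variable name is made up):

```php
<?php
// Instead of spelling out every possible "empty" case...
if ($records === null || $records === false || $records === array()) {
    // ...
}

// ...just check for falsiness:
if (!$records) {
    // nothing was fetched
}
```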
I WON'T EVEN START TALKING ABOUT NOT RETURNING NULL VALUES THERE, YOU MORON STILL STUCK IN THE 90s.
Prefixes before object properties
OMG, words can't describe the amount of disgust I experience when seeing a piece of crap like the following:
Freaking degenerates coming to PHP from C/C++ completely missed the whole point of the
m_ prefix before member variables. Remember, you brainless ape!
MEMBER VARIABLES, DO YOU GET THEM NOW?
In C++ we can't differentiate between member variables and other local variables without special cues like proper naming or FQN, so the
m_ convention was invented. In PHP we always refer to member variables through the $this object, so we always know which variable is internal to the class and which variable conforms to fucking Demeter's Law.
There's nothing uglier in PHP code than
$this->_shit. WTF is this
->_ noise, are we in Perl now? HATEHATEHATE
Here's how to fix the default too-wide and too-"round" fonts rendered by EMACS
under KDE (and maybe other DEs, too).
Write this into the
After that, run the following:
And after that EMACS will really get the same-looking fonts as the rest of the system
around it. Thank you, Atragor, for your answer at archlinux.org.
Okay, today's task: pushing all 1892 files of the Yii framework to the Git
repository is a burden, and upgrading it to a new version pollutes the git log
with changes to files you don't care about. Let's package it into a PHAR!
Rasmus Schultz made a special script to package the Yii framework into
a PHAR archive. I forked it to save it for the future (at least my GitHub account
will live as long as this blog).
You just need to put this script into the root of the Yii codebase (a cloned GitHub
repo, for example) and run it as usual:
This will create the PHAR archive in the same directory.
However, beware of catch 1: PHP can refuse to create the packed archive, emitting
the following error:
I decided to just remove lines 140 and 142 from the script:
And that's all. I can live with a 20 MB file in the repo, and don't really care about
To connect the resulting PHAR to your Yii application, replace your usual:
With the following:
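A sketch of both variants (the exact paths, including the entry path inside the archive, are assumptions; adjust them to your layout):

```php
<?php
// The usual way: require the framework entry script from the unpacked tree.
// require_once(dirname(__FILE__) . '/../framework/yii.php');

// The PHAR way: register the archive (use the real path to your .phar file)...
new Phar(dirname(__FILE__) . '/../framework/yii.phar');
// ...and then require the entry script through the 'yii' alias, verbatim:
require_once('phar://yii/yii.php');
```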
Note that in the
new Phar() invocation you should use the real path to your phar
archive file, but the second line should be written verbatim, as the PHAR being
created is made with the alias 'yii', using the feature described in
the documentation for
However, of course, there's catch 2: the Yii built-in asset manager (CAssetManager)
has a too specific
publish method, unable to cope with custom PHP streams.
So, we need a fixed version.
I decided to create a descendant of
CAssetManager with a descriptive name
and exactly the following definition:
I'm really, really sorry that you had to read this traditionally horrible Yii
code, but it was inevitable... :(
The main change starts from the
$isPhar = strncmp('phar://', $path, 7) === 0 part.
Now just link this asset manager instead of the built-in one:
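In the application config, something like this (PharAwareAssetManager stands for whatever you named the descendant):

```php
<?php
// protected/config/main.php (fragment)
return array(
    // ...
    'components' => array(
        'assetManager' => array(
            'class' => 'PharAwareAssetManager',
        ),
        // ...
    ),
);
```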
Now your Yii web application uses the phar archive instead of a huge pile of separate
files. They say this increases performance, but my personal reason was
just to reduce the number of files inside the repository.
Today's guess is this: you have a test harness which utilizes the database,
and it has enough test cases in it for a full test run to be so slow that you cringe
at the very thought of launching it.
We discuss a lifehack-style solution to this problem: putting the DBMS which
will be used by our test harness completely on a RAM disk, so it'll
operate from much faster memory than the hard drive (even faster than from
The main issue is this: as you probably need only test datasets on the ramdisk, and
only for the period of running the test suite, you will need separate DBMS instances
to work on the ramdisk, not the ones already installed on your system.
In the end you'll get a special preparation script which you should launch
before your tests. After this, your test suite will run with isolated MySQL
and MongoDB instances on top of a ramdisk.
If your test suite has large quantities of integration tests using databases,
this will greatly increase the speed of the test run. In one particular
case it was reported that the run time dropped from 1:30 to 18 seconds, 5 times faster.
You should have a *nix system, as MySQL Sandbox (see below) works only there.
OSX will probably do, too. This system should have some bash-like shell (obviously)
and Perl installed. The kernel should have
tmpfs support. Your nonprivileged
user should be able to mount filesystems, or you'll need to hack the script to use
sudo at the mount step (assuming your user can sudo).
For the isolated MySQL instance you need the MySQL distribution downloaded from
the website. For the isolated MongoDB instance you need the MongoDB distribution
downloaded from the website. Note, however, that the whole MongoDB server
is contained in just a single binary file ~8MB in size.
Of course, as we will work completely in memory, you have to be sure that you
have enough RAM to store your (presumably test) datasets.
Making a ramdisk is very simple on recent Linux:
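A sketch (the size and paths are examples):

```bash
RAMDISK_NAME=test_db_ramdisk
RAMDISK_DIR=/tmp/test_db_ramdisk

mkdir -p "$RAMDISK_DIR"
sudo mount -t tmpfs -o size=2048M "$RAMDISK_NAME" "$RAMDISK_DIR"
```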
RAMDISK_NAME is just an identifier for the mountpoints table.
RAMDISK_DIR is the
directory which will be turned into a RAM-based filesystem.
After the mount action, anything you put into
RAMDISK_DIR will be placed
into memory, without interaction with the physical hard drive.
Of course, this means that after unmounting the ramdisk, everything which was
in it will be lost.
Shutting down the ramdisk
Just unmount the created mount point:
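Assuming the same RAMDISK_DIR as above:

```bash
sudo umount "$RAMDISK_DIR"
```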
Note that you should probably stop all running services which still use the
ramdisk prior to unmounting!
Isolated MySQL instance
We'll use the MySQL Sandbox project to launch isolated MySQL instances.
For it to work you need the MySQL distribution downloaded from the website.
MySQL Sandbox is installed with the following command:
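Installation from CPAN is the standard way:

```bash
cpan MySQL::Sandbox
```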
as root, and you'll need to run it as follows:
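A sketch of such an invocation, using the variables explained below (the exact options may differ; see the MySQL Sandbox docs):

```bash
SANDBOX_HOME=$RAMDISK_DIR/mysql_sandbox \
  make_sandbox $MYSQL_PACKAGE -- --sandbox_port=$MYSQL_PORT_DESIRED --sandbox_directory=$MYSQL_DIRNAME
```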
It's a one-liner split to two lines for readability.
Note that you need root privileges only to install the MySQL Sandbox application
itself; all further communication with it will be done from an unprivileged account,
most probably the same one under which you launch the test suite.
We need to set the
SANDBOX_HOME variable prior to launching the sandbox factory,
because that's how we control where it'll put the sandboxed MySQL instance.
By default it'll use
$HOME/sandboxes, which is probably not what you need.
RAMDISK_DIR is the same directory as the one we prepared in the previous step.
MYSQL_PACKAGE is the full path to the MySQL distribution package downloaded
from the website. Please note that MySQL Sandbox will unpack it into the same directory
and will essentially use this unpacked contents to launch the sandboxed MySQL.
So you'll probably need to move the package to the ramdisk, too, to increase the
performance of actually launching and running the MySQL server itself; note,
however, that the unpacked 5.6.0 contents are 1GB in size.
Remember the MYSQL_PORT_DESIRED value you use here, because you'll need
it to configure your test suite to point at the correct MySQL instance.
MYSQL_DIRNAME is of least importance here, because it's just the name of the
subfolder under
SANDBOX_HOME in which this particular sandbox will be placed.
After
make_sandbox has ended its routine, you can check that your sandbox is
indeed working by running:
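For example, via the `use` client wrapper that MySQL Sandbox creates inside the sandbox directory:

```bash
$SANDBOX_HOME/$MYSQL_DIRNAME/use -e "SELECT VERSION();"
```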
Connection to Isolated MySQL Instance
You should use the following credentials to connect to sandboxed MySQL:
- host : '127.0.0.1'
- port :
- username : 'msandbox'
- password : 'msandbox'
Please note that you must use the
127.0.0.1 value for the host and not 'localhost'
as usual, because of the sandbox's internal security configuration.
Shutting Down the Isolated MySQL Instance
To shut down the sandboxed MySQL, issue the following command:
or the more forceful one:
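A sketch, assuming the paths from above (both are the sandbox's own control scripts):

```bash
# Graceful shutdown:
$SANDBOX_HOME/$MYSQL_DIRNAME/stop

# Forceful shutdown:
$SANDBOX_HOME/$MYSQL_DIRNAME/send_kill
```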
These commands are needed mostly to stop the working daemon; after unmounting
the ramdisk, all of the sandbox data will be purged out of existence.
Isolated MongoDB instance
The MongoDB server is contained in just a single binary file, so it'll be a lot
easier compared to MySQL.
You'll need the MongoDB distribution downloaded from the website, too.
This time unpack it to some directory.
After that, you can launch a separate instance of MongoDB with the following command:
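A sketch of the launch command, using the variables explained below (the logging flags are extras added here so mongod can detach):

```bash
"$MONGODB_BIN" --dbpath "$MONGODB_DIR" --port $MONGODB_PORT_DESIRED \
    --pidfilepath "$MONGODB_DIR/mongod.pid" \
    --fork --logpath "$MONGODB_DIR/mongod.log"
```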
MONGODB_BIN is the
/bin/mongod path preceded by the full path to the unpacked
MongoDB distribution. Here you can even use your system MongoDB package, in
case you have it installed. As a full example,
MONGODB_BIN can have a value
MONGODB_DIR is a path to a directory under
RAMDISK_DIR into which this MongoDB
instance should put its files. For example, it can be just a
As with MySQL,
MONGODB_PORT_DESIRED is a crucial parameter for specifying the
correct MongoDB instance to connect to. Remember it, as you will need to set
it up in your test suite.
Connecting to Isolated MongoDB Instance
By default MongoDB does not enforce any usernames or passwords, so you just
need to use the hostname and port parameters.
- host : 'localhost'
- port :
For example, for the PHP Mongo extension, you get a connection to this instance like this:
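A sketch, assuming 27018 stands for the MONGODB_PORT_DESIRED value you chose:

```php
<?php
// Connect the legacy PHP Mongo extension to the sandboxed instance.
$mongo = new Mongo('mongodb://localhost:27018');
$db    = $mongo->selectDB('test');
```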
Shutting Down the Isolated MongoDB Instance
As you provided the
--pidfilepath command-line argument when launching the
MongoDB server, the following command should do the trick:
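A sketch, assuming the pidfile path used in the launch command above:

```bash
kill $(cat "$MONGODB_DIR/mongod.pid") && rm "$MONGODB_DIR/mongod.pid"
```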
Essentially, we are feeding the
kill command with the contents of the pidfile
and removing it afterwards.
Bash scripts to automate the sandbox starting and stopping
There is a GitHub repository with the example scripts nicely laid out
along with comments.
There are three scripts:
db_sandbox_properties.sh: this holds all the variable parameters you need to properly set up the sandboxes.
db_sandbox_start.sh: you run this script before your test suite.
It was updated with a command to copy the table schema from a source MySQL
instance to the sandbox MySQL instance, so, if you have an accessible up-to-date
MySQL instance with the schema for your tests, this script will copy the schema
to the sandbox and you will have a DB ready to accept test cases.
NOTE that you will need sudo rights for your unprivileged user to successfully
mount the ramdisk. If you do not have them, you can hack the script
in any way you see sufficient to successfully mount the ramdisk.
db_sandbox_stop.sh: you run this script when you don't need the sandbox anymore.
It'll stop both MySQL and MongoDB and unmount the ramdisk (note that you'll
need sudo rights for this, too).
Suppose you want to run something right at Debian startup. Here are the necessary
actions for that:
At a minimum, /etc/init.d/myscript should be the following:
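A minimal sketch of such a script (the LSB header values here are examples; adjust them to your case):

```sh
#!/bin/sh
### BEGIN INIT INFO
# Provides:          myscript
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Run my command at boot
### END INIT INFO

# Whatever should run at startup goes here:
/usr/local/bin/mycommand &
```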
Details about this header can be found in the Debian wiki. The script name
myscript must be repeated exactly in the name of the script file, in the
Provides field, and in the call to
Runlevels can be looked up here: http://wiki.debian.org/RunLevel.
That's how you get the name of the last tag applied to the current branch in a Git repo:
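One way to do it (not necessarily the exact command from the original post):

```bash
# Plain `git describe --tags` prints something like v1.2.3-14-g0a1b2c3;
# --abbrev=0 cuts it down to the last tag itself.
git describe --tags --abbrev=0
```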
We need to meddle with the output, because plain
git describe gives us additional info
in the form of
Note the funny literal 'g' before the
And that's how you get the list of changes since some
COMMIT up to the current
state of the working copy, in a really pretty format:
"ISO Date (Author) Commit :
HEAD is the literal "HEAD" there. You can substitute the
COMMIT token with either
a commit hash or a tag.
Now you write the following script and place it in the root of your project's
codebase:
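A sketch of such a script, combining the two commands above (the exact format may differ from the original):

```sh
#!/bin/sh
# Show all changes since the last tag, one line per commit.
LAST_TAG=$(git describe --tags --abbrev=0)
git log --pretty=format:'%ai (%an) %h : %s' "$LAST_TAG"..HEAD
```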
Name it
changelog and then you can just do:
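Assuming the script was made executable:

```bash
chmod +x changelog   # once
./changelog
```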
And get something like this:
And these will be only the changes since the last tag applied. Excellent for quick reports
about the current upstream.
Here I save my EMACS config file for myself as backup and for everyone to see.
Today I am packaging some modules used by AFCALC and my own writings.
In the AFCALC GitHub repo I am placing a Wiki page which lists every
external link relevant to the project. Also, I plan to write a short article
explaining what's going on in AFCALC to the wider public.
The initial paper which was used as an article accompanying my diploma project
was uploaded to Scribd: Numerical Modelling of the 2D Explosion of the Elliptic
Charge using Multithreading.
The Integration module, which is bundled with AFCALC, will be published under the name
Data.Complex.Integrate on HackageDB.
The Theta module, which is bundled with AFCALC and used by the reference model, will
be published under the name
Numeric.Functions.Theta on HackageDB.
I'll cabalize Integrate and Theta right after finishing this post.
While working on various web projects written completely in PHP, I have collected
some custom procedures for more satisfying work.
Often you need to trim some text to a set number of characters, but
do not want to cut mid-word. This is a function which cuts text between words,
trying to reduce the string length to less than the provided value.
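A sketch of such a helper (trim_by_words is a made-up name):

```php
<?php
/**
 * Cut $text at a word boundary so the result is at most $limit characters long.
 */
function trim_by_words($text, $limit)
{
    if (mb_strlen($text) <= $limit) {
        return $text;
    }
    $cut = mb_substr($text, 0, $limit);
    // Drop the possibly truncated last word.
    $pos = mb_strrpos($cut, ' ');
    return $pos === false ? $cut : mb_substr($cut, 0, $pos);
}
```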
Pretty-printing values of variables
So, you are a web developer. Why do you have to use such
Additionally, since we know about the existence of highlight.js, we want
to colorize the output if it is sufficiently complex.
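A sketch of such a helper (pp is a made-up name; highlight.js itself is wired into the page separately):

```php
<?php
// Dump a variable wrapped in <pre><code> so highlight.js can colorize it.
function pp($value)
{
    echo '<pre><code class="php">'
        . htmlspecialchars(var_export($value, true))
        . '</code></pre>';
}
```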
MIME encoding of strings
When sending e-mail in the real multi-national world, you need to declare the
character encoding on almost everything. To do this in e-mail headers, you should
wrap your header content in a special encoding declaration and convert it to
Base64. To speed up this process, I wrote a small helper function.
Inspired by a comment on mb_encode_mimeheader() at php.net.
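A sketch of such a helper (mime_header is a made-up name):

```php
<?php
// Wrap a header value in the RFC 2047 "encoded-word" declaration using Base64.
function mime_header($text, $charset = 'UTF-8')
{
    return '=?' . $charset . '?B?' . base64_encode($text) . '?=';
}

// Usage: $headers .= 'Subject: ' . mime_header('Тема письма') . "\r\n";
```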
My current science research (in pursuit of a Ph.D. degree) is in the field of mathematical
physics. I am doing some cool stuff with representing the processes of explosion,
filtration and electrochemistry as flows of ideal liquid. There is some hard-boiled
math with complex-valued functions. The project is a program in Haskell which lets you
set initial parameters and get a schematic chart representing
the final area and edges of the process. Finding the precise form of the edges (coordinates
of points at the edges) is the current goal.
To make my work public I set up a project at GitHub and just updated it with READMEs
and such. You may look at the draft theses in the
article/ directory. They are in
Russian, however. I plan to make an international version when there will be
The project is here: AFCALC at GitHub.
Well, AFCALC development goes smoothly and I have reached an important milestone.
The code base has shrunk substantially, as I was mostly fixing bugs, not adding features.
AFCALC is now equipped with one testing model with exp() as the transforming function
and one real-world model of an explosion made in the 1970s by L. M. Kotlyar.
It successfully plots the simplified version of Kotlyar's model, and this
is a very promising result.
I want to pass the following milestones:
- Plotting of the full version of Kotlyar's model. It is very important, because there exists a process of calculating a series of additional coefficients for the correction function, and I want it decoupled from the model and added to the AFCALC core.
- Simultaneous building of three binaries with
make: one with autotests, one with an interactive session, and one which plots the whole bunch of charts with different model parameters.
Tomorrow will be the day which gives me the plot of the full Kotlyar model.
Just posted two packages to Hackage. My intention was that
from now on everyone in the Haskell community can simply google for "Haskell integration
of complex functions" and it will be clear as day that the package complex-integrate
provides the needed functionality. The second package is a library with an implementation
of theta-functions. Hope it helps someone. Details below.
Integration of complex functions
A module for integration of complex functions, which I wrote from scratch because
every numeric library I found (including numeric-tools) could calculate
only integrals of real-valued functions. I placed it under Data.Complex.Integrate,
but maybe there is a more suitable place for such a module. It exports a single
function called integrate. The package is called complex-integrate. Today I uploaded
it to http://hackage.haskell.org/package/complex-integrate
Implementation of Theta-functions
A module with an implementation of theta-functions, special complex functions of
two variables. Details can be found at http://en.wikipedia.org/wiki/Theta_function.
I placed it under Numeric.Functions.Theta. It exports four theta-functions
and a small helper to calculate their second parameter. The package is called
theta-functions. Today I uploaded it to http://hackage.haskell.org/package/theta-functions