Many tutorials show how to do bare metal programming on a Raspberry Pi. Some are
very good, but they tend to have a certain magic vibe to them: “Trust me, just
use the following magic constants and it will work.” I can’t help but ask: hey,
where did this magic come from? What if I wanted to figure out all the details
by myself?
We definitely don’t have a shortage of methods for generating
normally-distributed random numbers from a source of uniformly-distributed
random numbers. One such method is the so-called Polar Method, a variation
of the Box-Muller Transform, which I have described before. You might want to
take a look at that post before reading this one.
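To give a taste of what the method looks like, here is a short, generic sketch of the Polar Method in Python — an illustration of the standard algorithm, not the exact code from the post:

```python
import math
import random

def polar_gaussian_pair():
    """One round of the Polar Method (Marsaglia's variant of Box-Muller):
    draw uniform points until one lands inside the unit disk, then
    transform it into two independent standard normal samples."""
    while True:
        u = random.uniform(-1.0, 1.0)
        v = random.uniform(-1.0, 1.0)
        s = u * u + v * v
        # Reject points outside the unit disk (and the degenerate origin).
        if 0.0 < s < 1.0:
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor
```

The rejection loop is the price paid for avoiding the sine and cosine calls of the basic Box-Muller Transform; about 21% of the candidate points are discarded.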
This algorithm is named after George Edward Pelham Box and Mervin Edgar Muller,
who published it in a two-page paper in 1958. The idea was not original, though:
it had already appeared in the 1934 book Fourier Transforms in the Complex
Domain, by Raymond E. A. C. Paley and Norbert Wiener. Stigler’s
law strikes again!
We don’t know for sure when the Euclidean Algorithm was created nor by whom, but
it was made famous around 300 BC by the Elements – the magnum opus of Greek
mathematician Euclid. Wikipedia describes it as “one of the oldest algorithms in
common use” and Knuth affectionately calls it “the granddaddy of all
algorithms”.
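For reference, the granddaddy of all algorithms fits comfortably in a few lines of Python (a generic sketch of the classic remainder-based formulation, not code from any particular edition of the Elements):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, a mod b) until the remainder is zero; the last nonzero value
    is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a
```

Euclid's original formulation used repeated subtraction rather than the remainder operation, but the idea — shrinking the problem while preserving the common divisors — is the same.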
This is a repository of more or less random programming things, made for my own
amusement and edification. I don’t know how this will evolve over time (if at
all), but I envision this as a collection of interactive visual explanations of
algorithms and data structures.
In the second and final part of this conceptual introduction to Machine Learning (ML), I’ll discuss its relationship with other areas (like Data Science) and describe what I perceive as a common theme among many ML algorithms. Emphasis on “what I perceive”: don’t take this as the truth.
“Machine Learning” is not just a buzzword — arguably, it is two. Almost everybody seems to be using Machine Learning (ML) in one way or another, and those who aren’t are looking forward to using it. Sounds like a good topic to know about. I did some nice Neural Network stuff with some colleagues in school in the late 90s. Maybe I could just brag that I have nearly 20 years of experience in the field, but that would not be an entirely honest statement, as I haven’t done much ML since then.
Anyway, this is a fun, useful and increasingly important field, so I guess it is time to do some ML for real. Here’s the first set of notes about my studies, in which I present some important concepts without getting into specific algorithms.
Once upon a time, Peter John Acklam devised a nice algorithm to approximate the
quantile function (AKA
inverse cumulative distribution function, or inverse CDF) of the normal
distribution. He made the
algorithm freely available, but unfortunately his page describing it has been
timing out for quite a while. So, for reference, here’s a quick overview of his
algorithm.
I could make this part fit into a single paragraph. Seriously, just watch:
Light, textures and shaders are part of the rendering state, so they are handled
just as shown in the previous post.
Sure, using them requires the use of subclasses of osg::StateAttribute that
have not been shown, like osg::Light, osg::Texture2D, and osg::Program,
but the idea is the same. So, just spend some time with the Open Scene Graph
reference documentation and you are done.
In the previous part, we talked about two very important OSG concepts: nodes and
drawables. Now, we’ll deal with a third very important concept: state sets.
These form what I consider the triad of the very fundamental OSG concepts.
Having conceived this text as a practical guide, I was tempted to jump right
into action, with an exciting example program displaying some nifty 3D graphics.
But, also having conceived this text as something useful for “very beginners”, I
resisted this temptation and decided to start with some basic concepts without
which the Open Scene Graph (OSG) would not make sense. So, before talking about
OSG per se, I’ll start by spending a little time on a quite fundamental
question.
This is the prologue for a series of posts teaching the most important concepts
for anyone learning to use the Open Scene Graph
(OSG).
A presentation I gave at the “Software Freedom Day” (Feevale, Novo
Hamburgo, RS) in September 02009, and repeated at TcheLinux 2009 (PUC-RS, Porto
Alegre, RS).
This was a short course I taught at the Game Development School (GDS) at
Unisinos, in November 02007. The slides and the source code used in the
examples are available.
I am one of those people who still haven’t managed to understand what is so
spectacular about this XML thing. I’ll even admit that it may be good for some
applications, and perhaps it may even be the best option in one case or another,
but the fact is that there is way too much fuss about it. People say “this
program uses XML files” as if that were a feature of the program, an advantage
for the user. What it seems to me, however, is that in most cases some other
simpler, more readable and more “writable” format would solve the problem much
more easily and efficiently.
Trying to decide whether you should install GRUB or LILO? Afraid of making the
wrong decision? The Illustrated Guide to Bootloaders comes to your rescue!
Undergrad Class Assignments
These are some class assignments I did when I was an undergraduate Computer
Science student. If nobody finds them useful, they’ll at least help me
remember some of the assignments I enjoyed most.
Most of the stuff here (especially the older items) is in Portuguese.
All the software available here is licensed under the Cursing License (which I
haven’t written yet — but I will write it someday).
VOX is a speech recognition school project I worked on back in 1998. Here’s a
brief description of the project, and most of the artifacts we produced back
then.