In the second and final part of this conceptual introduction to Machine Learning (ML), I’ll discuss its relationship with other areas (like Data Science) and describe what I perceive as a common theme among many ML algorithms. Emphasis on “what I perceive”: don’t take this as the truth.
“Machine Learning” is not just a buzzword; arguably, it is two. Almost everybody seems to be using Machine Learning (ML) in one way or another, and those who aren’t are looking forward to using it. Sounds like a good topic to know about. I did some nice Neural Network stuff with some colleagues in school in the late 90s. Maybe I could just brag that I have nearly 20 years of experience in the field, but that would not be an entirely honest statement, as I haven’t done much ML since then.
Anyway, this is a fun, useful and increasingly important field, so I guess it is time to do some ML for real. Here’s the first set of notes about my studies, in which I present some important concepts without getting into specific algorithms.
Once upon a time, Peter John Acklam devised a nice algorithm to approximate the
quantile function (AKA
inverse cumulative distribution function, or inverse CDF) of the normal
distribution. He made the
algorithm freely available, but unfortunately his page describing it has been
timing out for quite a while. So, for reference, here’s a quick overview of his
algorithm.
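To make this concrete, here is a minimal C++ sketch of the structure of the approximation: a rational approximation in the central region and a mirrored one in the two tails. The function name is mine, and the coefficient values are the ones commonly reproduced from Acklam’s page; treat them as an assumption and check them against a trusted copy before relying on the results.

```cpp
#include <cmath>
#include <limits>

// Piecewise rational approximation to the quantile function (inverse CDF)
// of the standard normal distribution, in the spirit of Acklam's algorithm.
// Coefficients are the commonly reproduced values; verify before serious use.
double normalQuantile(double p)
{
    static const double a[] = { -3.969683028665376e+01,  2.209460984245205e+02,
                                -2.759285104469687e+02,  1.383577518672690e+02,
                                -3.066479806614716e+01,  2.506628277459239e+00 };
    static const double b[] = { -5.447609879822406e+01,  1.615858368580409e+02,
                                -1.556989798598866e+02,  6.680131188771972e+01,
                                -1.328068155288572e+01 };
    static const double c[] = { -7.784894002430293e-03, -3.223964580411365e-01,
                                -2.400758277161838e+00, -2.549732539343734e+00,
                                 4.374664141464968e+00,  2.938163982698783e+00 };
    static const double d[] = {  7.784695709041462e-03,  3.224671290700398e-01,
                                 2.445134137142996e+00,  3.754408661907416e+00 };

    const double pLow = 0.02425;      // boundary between lower tail and center
    const double pHigh = 1.0 - pLow;  // boundary between center and upper tail

    if (p <= 0.0 || p >= 1.0)
        return std::numeric_limits<double>::quiet_NaN();  // domain is (0, 1)

    if (p < pLow)  // lower tail
    {
        const double q = std::sqrt(-2.0 * std::log(p));
        return (((((c[0]*q + c[1])*q + c[2])*q + c[3])*q + c[4])*q + c[5])
             / ((((d[0]*q + d[1])*q + d[2])*q + d[3])*q + 1.0);
    }

    if (p <= pHigh)  // central region
    {
        const double q = p - 0.5;
        const double r = q * q;
        return (((((a[0]*r + a[1])*r + a[2])*r + a[3])*r + a[4])*r + a[5]) * q
             / (((((b[0]*r + b[1])*r + b[2])*r + b[3])*r + b[4])*r + 1.0);
    }

    // Upper tail: same rational function as the lower tail, mirrored.
    const double q = std::sqrt(-2.0 * std::log(1.0 - p));
    return -(((((c[0]*q + c[1])*q + c[2])*q + c[3])*q + c[4])*q + c[5])
          / ((((d[0]*q + d[1])*q + d[2])*q + d[3])*q + 1.0);
}
```

If I recall correctly, Acklam also suggested an optional refinement step (an iteration of Halley’s method against the exact CDF) to bring the result close to full machine precision; the sketch above leaves that out for brevity.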
I don’t remember when I created my first homepage, but I know that in 1998 I
already had one. It had a mix of content created by myself and those funny
things people shared by email back then. I soon dumped all “third-party” content
and made my homepage a repository of stuff I created.
This was the philosophical and narcissistic section of my homepage, now
converted into a blog post. It has some drops of wisdom uttered by me over the
years (from around 1998 until 2012).
For everyone’s delight, here is my personal collection of pearls gathered
during political campaigns.
I could make this part fit into a single paragraph. Seriously, just watch:
Light, textures and shaders are part of the rendering state, so they are handled just as shown in the previous post. Sure, using them requires subclasses of osg::StateAttribute that have not been shown, like osg::Light, osg::Texture2D, and osg::Program, but the idea is the same. So, just spend some time with the Open Scene Graph reference documentation and you are done.
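As a small, hedged illustration of that idea, here is how a texture and a shader program could be attached to a node through its state set, following the same pattern used in the previous post. The image file name and the shader sources are placeholders invented for this sketch.

```cpp
#include <osg/Node>
#include <osg/Program>
#include <osg/Shader>
#include <osg/StateSet>
#include <osg/Texture2D>
#include <osgDB/ReadFile>

// Attach a texture and a GLSL program to a node's rendering state.
// "SomeTexture.png" and the shader sources are placeholders for this sketch.
void addTextureAndShader(osg::Node* node)
{
    osg::StateSet* stateSet = node->getOrCreateStateSet();

    // A texture is a state attribute bound to a texture unit (0 here).
    osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
    osg::ref_ptr<osg::Image> image = osgDB::readImageFile("SomeTexture.png");
    texture->setImage(image.get());
    stateSet->setTextureAttributeAndModes(0, texture.get(), osg::StateAttribute::ON);

    // A shader program is just another state attribute.
    osg::ref_ptr<osg::Program> program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::VERTEX,
        "void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }"));
    program->addShader(new osg::Shader(osg::Shader::FRAGMENT,
        "void main() { gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); }"));
    stateSet->setAttributeAndModes(program.get(), osg::StateAttribute::ON);
}
```

An osg::Light can be set the same way with setAttributeAndModes, though in practice it is usually brought into the scene through an osg::LightSource node.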
“A very good evening, friends! We are speaking live, straight from the Serra Fluminense, broadcasting an exciting and decisive match between Brazil and the Soviet Union. The fans have been eagerly awaiting this clash, and I have no doubt that great emotions lie ahead. Joining me for this broadcast, here at my side in the booth, is Joélson Seicheles. Good evening, Joélson! What can the Brazilian fans expect from today’s game?”
In the previous part, we talked about two very important OSG concepts: nodes and
drawables. Now, we’ll deal with a third very important concept: state sets.
Together, these form what I consider the triad of fundamental OSG concepts.
Having conceived this text as a practical guide, I was tempted to jump right
into action, with an exciting example program displaying some nifty 3D graphics.
But, also having conceived this text as something useful for “very beginners”, I
resisted this temptation and decided to start with some basic concepts without
which the Open Scene Graph (OSG) would not make sense. So, before talking about
OSG per se, I’ll start by spending a little time on a quite fundamental
question.