Assignment 2: The Common Object


In “Assignment 1: Photograph a Reflection” I started taking on the assignments that Harold Davis proposes in Becoming a More Creative Photographer. The second assignment reads as follows:

Your assignment: Pick an everyday object where you live. Make your mind a blank and forget everything you know or associate with the object. Try to see it with new eyes. Find out what is interesting about the mundane object, and create an image based on this interest.


The common object

And here we go. Thinking about the assignment took me hours. Taking the shot above, just a few minutes. I guess that, regardless of the photograph, the first thing that struck me was how unbalanced the time spent on the two parts of the assignment was. I decided not to even have the camera with me when I started thinking about the assignment. I am happy now that I did. I bet I would have taken the probabilistic approach if I had had it with me. I bet I would have shot pictures compulsively and then tried to pick one that fitted the assignment. This was actually an interesting mentality change. I have read many times that professional photographers usually have very tight and limited windows to shoot at certain locations while trying to beat a deadline. They usually talk about how much planning they have to do ahead of time to get the image they envision in a race against the clock. For example, check Joe McNally’s video where he deconstructs, and hints at, how much planning went into a photo shoot in New Orleans.

The main difficulty of the exercise, for me, was picking a common object. As you can guess, there are tons of them around. Which one should I pick? Why should I pick that one and not another? Eventually, I decided to stop thinking about it, took a deep breath, and picked the first one that came to mind. A red mug. That’s it. A red Seattle’s Best Coffee mug. OK. Who am I not to follow the rules of the assignment? Also, the red mug plays nicely with the red notebook I use to scribble down ideas for the assignments. So I went down the path of combining the red mug and the red notebook. I could even sketch, however poorly, a mug in my notebook.

Also, the mug is tactile, but it feels like the far other end of the spectrum from the notebook’s soft red skin. A contrast and duality that expands, contrasts, and defines both. Warm and cold, isolating and social, inviting and rejecting, exciting and relaxing, new and old, usable and relegated, personally unique and impersonally factory produced, comfortable touch and finger pain, quick thought and deep reflection, locally made and imported from far away, curved and frameless, possibilities and missed paths, heart warming and mind damaging, approachable and impersonal, future certainty and daily scramble, round knob and tall order.

All open interpretations of a daily object with no meaning of its own. An open container into which we pour our own meaning, anxieties, and hopes. Framing it, transforming it, redefining it at every moment, into something we conquer by owning the illusory meaning of the common object.

Having all this set, I had an approximate idea of the image I wanted to produce. The mug, the notebook, and the sketch of the mug. How long did it take me to take the picture? Two minutes, plus five more for a quick retouch and upload. I only shot ten frames, with different relative arrangements of the objects and different angles, trying to use the natural light in different ways. From the ten frames, three stood out, and only one survived the post-processing. The picture is unlikely to be a head turner, but for me it was. It showed me another path to photography, the one where you take pictures in your head and then, at the end of the process, you get your camera out and just materialize that elusive, flickering image, even if it is just a picture of a common object.

Assignment 1: Photograph a Reflection


Halsman defines creativity and imagination as Une Tournure D’Esprit in his book The Creation of Photographic Ideas. He also proposed a set of rules to help develop your photographic creativity. Some target your logical thinking, some target your unconscious. I was tempted to start practicing Halsman’s rules, but they felt a bit daunting and I was a bit lost on where to start. Were they written in order? Should I just focus on one of the rules? Should I keep thinking about it, or just stop and let it permeate for a while? Hence, I decided to dig around a bit more, just to see if I could make up my mind about which path to take.

After reading a few not very interesting webpages and blog posts, I finally ran into Becoming a More Creative Photographer by Harold Davis. It is a collection of seven articles, written back in 2009, on how to help you develop your photographic creativity. And yes, the bottom line is to keep shooting no matter what, as you may have guessed. However, each of the seven articles targets a specific topic for development. Along the way, each article also lists a sequence of assignments for you to practice around the discussed topic. If you have some time and curiosity, go check it out. I am sure you will find something useful there. “Expecting the unexpected” is the first article of the series. The first assignment on the list reads as follows:

Your assignment: Photograph a reflection (in water, in a mirror, etc) so as to convey an entirely different world.


Ephemeral doorway reflections

Actually, when I read this one I just smiled. I had done it even before I read the article and decided to take the assignments route. This looked promising. I had the opposite problem: now I had to choose one reflection picture that conveyed the idea of an entirely different world.

Talk about open interpretation in a sentence. I guess that is the goal, to force you to explore all possible interpretations of your images and see where they take you. I eventually chose the picture shown above. What pushed me to choose this one over a few others was that it shows both worlds, but with a subtle, ephemeral, and easy-to-miss reflected world. Moreover, it showed the reflected world as a narrow, flickering, and easily missable invitation to be transported to a realm of unknown and uncertain rules. To a time long past. To the place where your actions shaped your path. To the spot in time that enabled you to now be there, staring at the ephemeral doorway to hypotheticals.

cGA, Parallelism, Processes, and Erlang


Back in Fall 2006 I was lucky to be at the right place, at the right time. Kumara Sastry and David E. Goldberg were working to pulverize some preconceptions about how far you could scale genetic algorithms. As I said, I was lucky, and I helped the best I could. It turned out that the answer was pretty simple: as far as you want. The key to that result was, again, built on Georges Harik’s compact genetic algorithm. If you are curious, the results were published in a paper titled Toward routine billion-variable optimization using genetic algorithms.

Anyway, back on track. A few days ago, I was playing with Erlang and I coded, just for fun, yet another cGA implementation, now in Erlang. The code was pretty straightforward, so why not take another crack at it and write an Erlang version that uses some of the ideas from that paper?

The idea we used in the paper was simple: slice the probabilistic model into smaller segments and update all those model fragments in parallel. The only caveat, if you go over the cGA model, is that you need the evaluation of two individuals to decide which way to update the model. Also, you need to know when to stop, that is, when your global model has converged. The flow is pretty simple:

  1. Sample in parallel two individuals.
  2. Compute the partial evaluation (in the example below the beloved OneMax).
  3. Emit the partial evaluations.
  4. Collect the partial evaluations, and compute the final fitness.
  5. Rebroadcast the final evaluation to all model fragments.
  6. With the final evaluations at hand, just update the model fragments.
  7. Compute if the local fragment of the model has converged and emit the outcome.
  8. With all the partial convergence checks, decide if the global model has globally converged.
  9. If the global model has not converged, continue to (1).

The implementation below is quite rough. It could be cleaned up by using functional interfaces to hide all the message passing between processes, but you get the picture. Also, if you look at the implementation, you may notice that global fitness and convergence are each computed by a single process serializing those requests. You may remember Amdahl’s law; this is not a big problem with a few thousand model fragments, but as you scale up you are eventually going to have to worry about it. You could improve it, for instance, by using a broadcast tree. Anyway, let’s put all that aside for now and do a simple implementation to get the ball rolling.

-module(pcga).
-export([one_max/1, cga/6, accumulator/4, has_converged/3, cga_loop/8]).

% Accumulates the partial evaluations.
accumulator(Pids, Values1, Values2, Groups) when length(Pids) == Groups ->
  Acc1 = lists:sum(Values1),
  Acc2 = lists:sum(Values2),
  lists:map(fun(P) -> P ! {final_eval, Acc1, Acc2} end, Pids),
  accumulator([], [], [], Groups);
accumulator(Pids, Values1, Values2, Groups) when length(Pids) < Groups ->
  receive
    {eval, Pid, Value1, Value2} ->
        accumulator([Pid | Pids], [Value1 | Values1], [Value2 | Values2], Groups);
    stop -> ok
  end.

% Convergence checker.
has_converged(Pids, Votes, Groups) when length(Pids) == Groups ->
  FinalVote = lists:sum(Votes),
  lists:map(fun(P) -> P ! {final_converged, FinalVote == Groups} end, Pids),
  has_converged([], [], Groups);
has_converged(Pids, Votes, Groups) when length(Pids) < Groups ->
  receive
    {converged, Pid, Vote} ->
      has_converged([Pid | Pids], [Vote | Votes], Groups);
    stop -> ok
  end.

% OneMax function.
one_max(String) -> lists:sum(String).
 
% Samples a random 0/1 string from the probabilities in Model.
random_string(Model) ->
  lists:map(fun (P) -> case random:uniform() < P of true -> 1; _ -> 0 end end,
            Model).
 
% Generates the initial probabilistic model of length N (all probabilities at 0.5).
initial_model(N) -> repeat(N, 0.5, []).
 
% Given a pair of evaluated strings, returns the update values.
update({_, Fit}, {_, Fit}, N, _) ->
  repeat(N, 0, []);
update({Str1, Fit1}, {Str2, Fit2}, _, Size) ->
  lists:map(fun ({Gene, Gene}) -> 0;
                ({Gene1, _}) when Fit1 > Fit2 -> ((Gene1 * 2) - 1) / Size;
                ({_, Gene2}) when Fit1 < Fit2 -> ((Gene2 * 2) - 1) / Size
            end,
            lists:zip(Str1, Str2)).

% Check if the model has converged.
converged(Model, Tolerance) ->
  lists:all(fun (P) -> (P < Tolerance) or (P > 1 - Tolerance) end, Model).

% The main cGA loop.
cga(N, GroupSize, Groups, Fun, Tolerance, Print) 
  when N > 0, GroupSize > 0, Groups > 0, Tolerance > 0, Tolerance < 0.5 ->
  Acc = spawn(pcga, accumulator, [[], [], [], Groups]),
  Con = spawn(pcga, has_converged, [[], [], Groups]),
  lists:foreach(
    fun(_) ->
      spawn(pcga, cga_loop, 
            [N, GroupSize, Fun, initial_model(GroupSize), Tolerance, Acc, Con, Print])
    end,
    repeat(Groups, 1, [])).
 
cga_loop(N, Size, Fitness, Model, Tolerance, Acc, Con, Print) ->
  [{Str1, P1}, {Str2, P2} | _] = lists:map(
    fun (_) -> Str = random_string(Model), {Str, Fitness(Str)} end,
    [1,2]),
  Acc ! {eval, self(), P1, P2},
  receive
    {final_eval, FF1, FF2} ->
      NewModel = lists:map(fun ({M, U}) -> M + U end,
                           lists:zip(Model, update({Str1, FF1}, {Str2, FF2}, Size, N))),
      case converged(NewModel, Tolerance) of
        true -> Con ! {converged, self(), 1};
        false ->  Con ! {converged, self(), 0}
      end,
      receive
        {final_converged, true} -> 
          case Print of 
            true -> io:fwrite("~p\n", [NewModel]);
            _ -> true
          end,
          Acc ! Con ! stop;
        {final_converged, false} -> 
          cga_loop(N, Size, Fitness, NewModel, Tolerance, Acc, Con, Print)
      end
  end.

% Creates a list of length N repeating Value.
repeat(0, _, Update) -> Update;
repeat(N, Value, Update) -> repeat(N - 1, Value, [Value | Update]).

The code above allows you to decide how many model fragments (Groups) to create. Each fragment is assigned to a process. Each fragment holds GroupSize variables of the model, and N is the population size. A simple example of how to run the code:

c(pcga).
pcga:cga(50000, 500, 10, fun pcga:one_max/1, 0.01, true).

The model will contain 5,000 variables, split across 10 processes, each holding a fragment of 500 variables. I guess now the only thing left is to measure how this scales.

Une Tournure D’Esprit


Dali Atomicus, by Philippe Halsman

I still do not own a copy of Henri Cartier-Bresson’s The Decisive Moment. It seems hard to get a decent copy at a reasonable price. However, Philippe Halsman’s The Creation of Photographic Ideas has been on my shelves for quite a while now. In the book’s opening, Halsman defines creativity and imagination as une tournure d’esprit—a mental attitude and ability which can be directed and developed. Although this resonated strongly the first time I read the book, successive readings have led me to a different crossroads. A crossroads I am not sure how to approach. Philippe writes bluntly:

“Those photographers who take pictures belong to the candid photography school. Their greatest representative is the Frenchman Henri Cartier-Bresson who never interferes in the action of the photographs and whose unobtrusiveness is so unique that it has created a legend that, at the moment of the picture taking, Cartier becomes invisible. Similarly, the amateur who is photographing a baby in the crib is not making a photograph but taking it.

“The problem of taking and making photographs are completely different. In the first case, the photographer is a witness to the occurrence; in the second case, he is its creator.”

The rest of the book is solely targeted at boosting and developing two facets of creativity applied to photography: (1) oiling the logical thinking mechanisms that help idea creation, and (2) seeding the subconscious for spontaneous blooming—or, as he writes it, stimulation. This is not much different from other creative disciplines. It also plays well with the divergent and convergent thinking cycles that permeate almost all creativity literature. But all this rambling is beside the point. The main issue I run into every time I read the book again is the uncomfortable dichotomy between taking and making photographs. He goes further down the path of the dichotomy, making a literary analogy.

“The photographer who takes the picture is a visual reporter. The photographer who makes one is a visual author.”

Reporters versus authors. Documentary versus fiction. The premise echoes labeling and struggles. And that is the main reason I keep rereading Halsman’s book, regardless of whether it makes me feel uncomfortable. The book whispers: you should take a stand. The book seems to challenge you to take sides. Do you want to become a reporter or an author? It is not about the glamour, real or perceived, assigned to each of these labels. It is the struggle. The struggle that you define yourself by willingly choosing one of these two opposed worlds. It is the Aristotelian dichotomy that labels introduce in life.

Do you want to take or make photographs? I do not have an answer, and that is an itch hard to scratch. I guess that most of the time I have been taking pictures. Maybe I should treat Philippe’s book as a challenge, not as a choice I have to make, but as an enriching experience. Maybe experimenting with each of the rules he so clearly outlines will make the itch go away. Isn’t experience also une tournure d’esprit, after all?

Jacques Henri Lartigue


I always thought that if I wrote about photography, it would likely be about some of Henri Cartier-Bresson’s photographs that I cannot shake off. Instead, I am writing about the first picture that got me puzzled to the point I needed to know how. How Jacques Henri Lartigue took such a gripping photo. The photo you can see below.


Jacques Henri Lartigue

Spectators on the side of the road appear angled toward the left. The rear wheel of car number 6 is deformed into an ellipse with its semi-major axis leaning in the opposite direction, away from the spectators. The construct gives the frame an incredible sense of speed and urgency. Having said that, this is not what got me staring at it again and again. What kept me intrigued was how he could shoot such a photograph and get such opposing lines on moving objects, forming such a surreal V.

I sketched a few theories. I read a bit more, a hard task when you are trying to avoid the answer. I barely knew anything about taking pictures then, not that I know much better now. Yes, panning while shooting may definitely have something to do with it. So I took my DSLR to a street corner and started taking pictures of cars driving by while panning. The frames showed a blurry background and crisply focused cars, but I could not reproduce the opposing angles between the background and the moving objects. Now I needed to know; there was no way I was going to give up. Then it hit me. Maybe his camera and mine were not close relatives at all. And yes, I went and read some more about the cameras and hardware used when Jacques was taking his pictures. Eventually, I found the missing piece. Suddenly, everything fell into place. It was so painfully obvious now.

I am not going to spoil the joy of figuring it out on your own. You should definitely do the exercise. It is much more rewarding than being handed the answer. On another note, a while back I got an awesome gift in the form of a nice, compact collection of Jacques’ pictures in Thames & Hudson’s Photofile series. Worth checking out if you can get a copy. Be careful though, some of those pictures may grab your thoughts for a while.

Yet Another cGA Implementation, Now in Erlang.


Wanna have some Sunday afternoon fun? Just refresh your Erlang skills. Since this is me having fun, what better way to do so than to write yet another implementation of the compact genetic algorithm (cGA) originally proposed by Georges Harik?

I am going to skip describing the original algorithm and focus a bit on how to implement it in Erlang instead. You can find some nice books elsewhere and more information on the Erlang site. Erlang is an interesting mix of functional and logic programming languages. If you ever wrote code in Prolog, Erlang is going to look familiar. It will also look familiar if you are coming from Haskell, although, Erlang being a dynamically typed language, you will miss the type system and inference. Nevertheless, give it a chance. Its concurrency model is worth reading about, though I will leave that for future posts.

Anyway, without further preamble, let’s dive into a naïve implementation of cGA in Erlang. Lists are an integral part of Erlang, hence it seems obvious that individuals could be represented by a list of integers. Under this representation, OneMax is trivial to implement by summing all the elements of the list defining an individual. Following this train of thought, the probabilistic model could also be represented by a simple list of floats (each entry representing the probability of 1 for a given locus).

Given the above description, a cGA implementation just requires: (1) an individual constructor based on sampling the current probabilistic model, (2) a function that, given two evaluated individuals, computes the model update, and (3) a function to check whether the probabilistic model has converged. Once these basic functions are available, writing a cGA boils down to sampling two individuals, computing the updates based on the evaluated individuals, and updating the probabilistic model. This process is repeated until the model has converged. The Erlang code below shows a possible implementation of such an approach.

% Naive implementation of the compact Genetic Algorithm in Erlang.
-module(cga).
-export([one_max/1, cga/4]).

% OneMax function.
one_max(String) -> lists:sum(String).

% Samples a random 0/1 string from the probabilities in Model.
random_string(Model) ->
  lists:map(fun (P) -> case random:uniform() < P of true -> 1; _ -> 0 end end,
            Model).

% Generates the initial probabilistic model of length N (all probabilities at 0.5).
initial_model(N) -> repeat(N, 0.5, []).

% Given a pair of evaluated strings, returns the update values.
update({_, Fit}, {_, Fit}, N, _) ->
  repeat(N, 0, []);
update({Str1, Fit1}, {Str2, Fit2}, _, Size) ->
  lists:map(fun ({Gene, Gene}) -> 0;
                ({Gene1, _}) when Fit1 > Fit2 -> ((Gene1 * 2) - 1) / Size;
                ({_, Gene2}) when Fit1 < Fit2 -> ((Gene2 * 2) - 1) / Size
            end,
            lists:zip(Str1, Str2)).

% Check if the model has converged.
converged(Model, Tolerance) ->
  lists:all(fun (P) -> (P < Tolerance) or (P > 1 - Tolerance) end, Model).

% The main cGA loop.
cga(N, Size, Fun, Tolerance) when N > 0, Size > 0, Tolerance > 0, Tolerance < 0.5 ->
  cga_loop(N, Size, Fun, initial_model(N), Tolerance).

cga_loop(N, Size, Fitness, Model, Tolerance) ->
  case converged(Model, Tolerance) of
    true ->
      Model;
    false ->
      [P1, P2 | _] = lists:map(
        fun (_) -> Str = random_string(Model), {Str, Fitness(Str)} end,
        [1,2]),
      cga_loop(N, Size, Fitness,
               lists:map(fun ({M, U}) -> M + U end,
                         lists:zip(Model, update(P1, P2, N, Size))),
               Tolerance)
  end.

% Creates a list of Size repeating Value.
repeat(0, _, Update) -> Update;
repeat(N, Value, Update) -> repeat(N - 1, Value, [Value | Update]).

You can run this code by pasting it into a file named cga.erl. Then use the Erlang shell (started via $ erl) to compile and run cGA as shown below.

1> c(cga).
{ok,cga}
2> cga:cga(3, 30, fun cga:one_max/1, 0.01).
[0.999, 0.989, 0.098]

A couple of interesting considerations. Compiling and loading code in Erlang supports hot code replacement without stopping a running production system. Obviously this property is not critical for the cGA exercise, but it is an interesting one nonetheless. Another one is that functions, thanks to Erlang’s functional programming ancestry, are first-class citizens you can pass around. That means the current implementation supports passing arbitrary fitness functions without having to change anything in the cGA code.
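For instance, here is a quick sketch of passing a different fitness function from the shell. The weighted OneMax below is just a toy function I made up for illustration; any fun of arity 1 that maps a string (a list of 0s and 1s) to a number should work:

```erlang
% Weighted OneMax: the gene at position I contributes I to the fitness.
1> WeightedOneMax = fun (Str) ->
     lists:sum([I * G || {I, G} <- lists:zip(lists:seq(1, length(Str)), Str)])
   end.
% Run the cGA against it; no changes to cga.erl needed.
2> cga:cga(3, 30, WeightedOneMax, 0.01).
```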

Finally, I mentioned that this is a naïve implementation, written mostly to refresh my rusty Erlang syntax. You may want to spend some time profiling this implementation to see how to improve it. Also, you may want to start thinking about how we could take advantage of Erlang’s concurrency model to build a not-so-naive implementation of cGA.
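As a crude starting point for that profiling, timer:tc/3 reports the wall-clock time of a single call in microseconds. The problem sizes below are arbitrary, and this assumes the cga.erl module above is compiled and on your path:

```erlang
1> c(cga).
{ok,cga}
2> {MicroSecs, Model} = timer:tc(cga, cga, [100, 200, fun cga:one_max/1, 0.01]).
3> MicroSecs / 1000000.   % seconds for one full run
```

Repeating the call over a range of string lengths and population sizes should give a first feel for how the run time grows.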

Revamping My Blog


I have been away from my blog for quite a long time. I have barely posted anything compelling in the last three years. Most of the updates were sporadic announcements for ACM SIGEVO’s GECCO conference, but even that was spotty at best. Yes, like everybody else, I gravitated toward social media (pick your favorite poison here).

I spent quite a bit of time thinking about what I wanted to use my blog for. Should it be the same kind of blog? Should I change it under the hood? Should I give it a golden retirement, since it seems I have no stories to share anymore? Then, in the midst of all these unanswered questions, I realized I wanted my blog to be what it has been all along: whatever I need it to be. Yes, some thoughts are faster to share on ephemeral social media outlets, but there are things you want to keep around longer. Hence, I decided to start a face lift as part of this renewed path. Talking about look and feel, I kept it pretty similar, as you may have noticed. No big changes, mostly layout updates, removing as much clutter as possible, a bit of font sprinkling here and there, but ultimately trying to keep it pretty much the same. I guess I like the cozy feeling of it staying familiar.

However, one thing I did decide to change, after people I care deeply about kept insisting that I should, was to build a more permanent home, as I mentioned earlier, for those moments you want to keep around long after social media’s rapid pace has digested them into oblivion. Curating photos into gift-wrapped packages you find while window shopping was one of the itches that helped drive the change. As a result, you may now see a ‘Photo Stream‘ entry in the top menu. It is a running stream of some of the photos I share on my G+ profile. Under this running photo stream, you will soon find some of those gift-wrapped packages containing the photos I cannot shake away. Today, I am adding one. Make sure you check it out.

Will this revamping of the blog make me post more often? That is another story.

GECCO 2012 Deadline Extended to January 27


It is that time of year. Rushing to get your papers ready for GECCO 2012? Here is some good news: the deadline has been pushed back to January 27 to help with those last-minute pushes :). You can find more information at the conference website

http://www.sigevo.org/gecco-2012/

or you can follow GECCO 2012 updates on Twitter (http://www.twitter.com/gecco_2012), Google+ (+GECCO http://goo.gl/F4ZTM), or Facebook (http://goo.gl/IqEbW).

A Little Functionality and Face Lift


It has been a while since the last face lift for this blog. No, I was not planning any major revamp, just a simple one. Since it was released, I have had the little green ShareThis button hanging around. I just wanted to balance the elements on the page a bit. I decided to reposition the button at the top right of the excerpts and of the posts themselves. While doing the repositioning, I decided to simplify its functionality a bit and replace the button with something lighter but still purposeful. After giving it a bit of thought, I replaced it with Google’s +1 button for publishers. It looks a little better balanced and does not clutter the layout. More information on how to add the +1 button to your site can be found at the +1 button page for webmasters.

GECCO 2011 Healthier Than Ever


I just got a note from Pier Luca Lanzi with some raw numbers for this year’s GECCO 2011 conference. The numbers are staggering. I will start with the numbers from the previous editions since GECCO became an ACM conference, just to build up suspense. The number of paper submissions to the main conference in each of the previous editions is listed below.

Year Number of Submissions
2006 446
2007 577
2008 451
2009 531
2010 373

And after this prelude, here comes the number of submissions for this year: 686. Yes, you read it correctly, 686 papers were submitted to the conference. That number breaks the record of submissions since GECCO joined ACM, and it is 109 papers more than the previous record, set in 2007.

But do you want to know what is even better than that? You can still participate by submitting workshop and late-breaking papers. If you want to learn more, just check the conference calendar.