The exhibition Big Bang Data has now moved to Madrid, and I was asked to adapt my Barcelona-based project to show data from Madrid.

It was a good opportunity to adapt the project to another city, since it would let me compare behaviors between the two cities. It took me a few hours to obtain and transform the Madrid shapefiles, retrieve all the data again and re-run the whole data transformation process, but the result was worth it:


On this occasion, not only the city but also its neighborhoods and the Community of Madrid are plotted, so you get more insight into the customers' movements.

Since customers are grouped by their origin (based on their postal code), showing the paths traveled by all customers from their origin to the point where the purchase was made reveals that some shopping centers act as 'demographic walls': customers from the city don't move to the outskirts, and customers from the surrounding neighborhoods don't go into the city:

Comparing the Barcelona and Madrid data, the customers' behavior is quite similar: the number of transactions rises to a maximum in the days just before Christmas, and then activity falls to a minimum. For the rest of the inspected period both cities show similar behavior. Another expected point is that expenditure is much higher in Madrid (about 3x for city residents and around 2x for those living close to the city).

Elastic lists are a good technique for navigating n-dimensional datasets, applying faceted browsing in a visual way. I believe the original idea is from Moritz Stefaner; you can see his excellent work on it here.

So far there doesn't seem to be an implementation of elastic lists with d3.js, so here is a first approach; check it out here.
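To give an idea of the approach, here is a minimal, hypothetical sketch of the core mechanism (not the actual source of the project linked above), assuming the d3 v6+ API and flat records with one field per facet; facet names, element ids and data fields are illustrative only:

```javascript
// Rough sketch of the elastic-lists idea with d3.js (v6+ API assumed).
// Each facet is a column of blocks; a block's height is proportional to the
// number of records matching the filters set on the *other* facets.
const facets = ["category", "district", "year"];   // hypothetical dimensions
const activeFilters = {};                          // e.g. { category: "food" }

function matching(records, skipFacet) {
  // records passing every active filter, ignoring the facet being rendered
  return records.filter(d =>
    facets.every(f =>
      f === skipFacet || !activeFilters[f] || d[f] === activeFilters[f]));
}

function renderFacet(records, facet, height) {
  const counts = d3.rollup(matching(records, facet), v => v.length, d => d[facet]);
  const scale = d3.scaleLinear()
      .domain([0, d3.sum(counts.values())])
      .range([0, height]);

  // one <div> per facet value, sized by its share of the filtered records
  d3.select("#" + facet)                           // assumes <div id="category"> etc.
    .selectAll("div.value")
    .data(Array.from(counts, ([value, count]) => ({ value, count })), d => d.value)
    .join("div")
      .attr("class", "value")
      .style("height", d => scale(d.count) + "px")
      .text(d => `${d.value} (${d.count})`)
      .on("click", (event, d) => {                 // toggle the filter, redraw all facets
        activeFilters[facet] = activeFilters[facet] === d.value ? null : d.value;
        facets.forEach(f => renderFacet(records, f, height));
      });
}

// Initial draw, assuming `records` is an array of flat objects already loaded:
// facets.forEach(f => renderFacet(records, f, 400));
```

The key point of the technique is that clicking a value in one facet resizes the blocks in every other facet, so the distribution of the remaining dimensions stays visible while you filter.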


The source code is available; use it as you wish!

Recently I was asked to develop an online interactive piece for the University of La Rioja (UNIR) that allowed readers to explore crime in Spain across a number of different metrics.

This piece provides an interactive map, allowing the user to display the crime distribution in Spain based on different criteria such as crime type and subtype, crime status and year.
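As a rough illustration of how such a filter-driven map can work (a hypothetical sketch, not the code behind the piece), assuming a d3 v6+ choropleth where each province path carries its code and the crime records are flat objects with type, status, year, province and count fields:

```javascript
// Hypothetical sketch: recolor a d3 choropleth when the filter controls change.
// Field names, selectors and defaults below are assumptions, not the real piece.
const filters = { type: "all", status: "all", year: "2013" };
let loadedCrimes = [];                              // filled after e.g. d3.csv(...)

function updateMap(crimes) {
  // aggregate the records matching the current criteria, per province code
  const counts = d3.rollup(
    crimes.filter(d =>
      (filters.type === "all" || d.type === filters.type) &&
      (filters.status === "all" || d.status === filters.status) &&
      d.year === filters.year),
    v => d3.sum(v, d => d.count),
    d => d.province);

  const color = d3.scaleSequential(d3.interpolateReds)
      .domain([0, d3.max(counts.values()) || 1]);

  d3.selectAll("path.province")                     // provinces drawn from TopoJSON
    .attr("fill", d => color(counts.get(d.properties.code) || 0));
}

d3.selectAll("select.criterion").on("change", function () {
  filters[this.name] = this.value;                  // e.g. <select name="type">
  updateMap(loadedCrimes);
});
```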

It was a fun assignment, as it covered the whole process: starting with preparing all the data and ending with building the visualization and interactivity of the piece.

Link: Crime distribution in Spain

Crime distribution map of Spain


The Exhibition BIG BANG DATA explores the phenomenon of the information explosion we are currently experiencing. The last five years have seen the emergence of a generalized awareness among academic and scientific sectors, government agencies, businesses and culture that generating, processing and above all interpreting data is radically transforming our society.

It was a pleasant surprise for me when I learned that my project Barcelona Commercial Footprints would be exhibited there. It's not every day that a project of mine shares the same space with projects made by Aaron Koblin or Vizzuality, or with historical visualizations by Florence Nightingale, John Snow or the famous Charles Joseph Minard.

The exhibition reflects really well the state of the art of the data paradigm we are experiencing in our daily lives, and how our lives are changing because of it. It's worth a visit if you are interested in data!

And here are some pictures:

Barcelona Commercial Footprints
Another data visualization project made for another challenge, this time the Innova Challenge Big Data, where the BBVA bank opened its real trade data for the first time in history.

Packt Publishing contacted me to write a short review of its new book about OpenRefine. After reading the e-copy they gave me, I consider the book a good demonstration of what you can do with this software, and a good resource to keep as a reference. For completeness, here is the review I posted on Amazon:
“A good book about OpenRefine, readable and technically accurate. The provided dataset is also useful, and it serves as a good example of what one can find in the real world.

For non-technical users and those not used to working with data, the book helps with the steep learning curve one can face with OpenRefine. For users who already work with data every day, it is a good chance to introduce OpenRefine into their data processing pipeline.

The book and all its recipes cover all the basic (and not so basic) topics of OpenRefine, giving the user a good sense of what can be accomplished with the software. After covering the essential topics, it also offers a detailed introduction to regular expressions and GREL, which greatly improves the user’s ability to work with data. A good complement to the OpenRefine resources that already exist on the net.”

For the data visualization project I made for the International Challenge BBVA, I had a dataset with a large list of countries for which I needed the corresponding latitudes and longitudes. The list contained almost all the countries of the world, so I needed an automated way to get this data.

OpenRefine suits this kind of task very well, as it can fetch JSON from any web service based on the values in an OpenRefine project and create a new column out of the responses. So all we need to do is take the data we already have and send it to a web service; in this case we will use Google's Geocoding service.
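Inside OpenRefine this is done with the "Add column by fetching URLs" operation, building the request URL from the country value with a GREL expression and then parsing the response with GREL's parseJson(). For reference, the same lookup can be sketched outside OpenRefine in plain JavaScript (a hypothetical example; the API key is a placeholder, and Google now requires one for this endpoint):

```javascript
// Hypothetical sketch of the per-country geocoding lookup in plain JavaScript.
async function geocodeCountry(country) {
  const url = "https://maps.googleapis.com/maps/api/geocode/json?address=" +
              encodeURIComponent(country) + "&key=YOUR_API_KEY";
  const response = await fetch(url);
  const json = await response.json();
  if (json.status !== "OK") return { country, lat: null, lng: null };  // not found / quota
  const { lat, lng } = json.results[0].geometry.location;              // first match
  return { country, lat, lng };
}

// Example: enrich a short list of countries with coordinates.
const countries = ["Spain", "France", "Argentina"];
Promise.all(countries.map(geocodeCountry)).then(rows => console.table(rows));
```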
