Ari will now tell you everything – literally

Ari learnt something amazing and yet quite simple this week. Tell Ari to “show me all cars” and you will be directed to the website page with our entire inventory. Neat, right?

Everything in one place, all you need to see, and what else could you want?

Remember, Ari can already give you highly specialized searches and comparisons – she’s got that covered. Now all we needed was for her to give us more generic information.

I will take the opportunity this week to clear one or two misconceptions about artificial intelligence.

First, let me give you my personal opinion on AI. I believe that AI should be as human as possible. If it were up to me, I wouldn’t create robots at all – I would instead create electronic humans.

We need not be scared of artificial intelligence

The first and foremost thing people ask in a discussion about AI is, “Is it safe? Will it take over the world like Skynet?”

OK, take a step back. AI in real life doesn’t look anything like it does in sci-fi movies, nor does it function like that. AI is perfectly safe. AI is developed in humans’ image.

Nothing can replace human beings. According to Mark Zuckerberg, the founder of Facebook, AI is there to improve our lives and make things better, not to take over the world.

And why would the robots EVER need to take over the world?

Artificial intelligence is actually more commonplace than you think

Many of us have already incorporated AI into our lives without even realizing it. Siri, Amazon Alexa, and Google Home are all examples of intelligent personal assistants. Once again, I’d reiterate that AI is nothing like what is shown in the movies.

In fact the actual form of AI is far more useful in our daily lives but a bit anti-climactic when it comes to showmanship.

Anyways, coming back to Ari.

This week, can you tell us which feature of Ari you use the most?

That’s it from this week. We will be back next week with more scintillating Ari updates.

Ari and advanced pricing filters

The most engrossing aspect of buying a car (or buying anything for that matter) is pricing. For most of us, price is the first thing we think of when we intend to buy something.

When it comes to buying cars, a smart person looks to get the maximum out of his or her budget. This can be a confusing task. In order to make an informed decision, one must compare all the cars that fall into the desired price bracket.

Image via CBT

But that is virtually impossible, right? Not quite.

Sure, you can use some advanced search tool with multiple search fields and get that list. However, I am not a big fan of traditional search forms. This is the age of artificial intelligence – you should get what you want by typing just one or two words.

Impossible is nothing  

In order to see each and every car that falls in your price range, you just have to tell Ari “cars between 50K and 100K” – that’s it. You don’t even have to extend the common courtesy of “please” or “hi”; just get straight to the point. Ari doesn’t hold grudges – she is professional and has a very big heart.

Get what you want

I gave you an example of a fancy pricing range of 50K – 100K. However, when I was looking to buy a car, my budget was a less flattering 25K. I decided to look for the best option between 20K and 30K, not sure if Ari could give me anything in my modest range.

But voila! I am on the brink of buying a Perodua AXIA 2017 – the best choice I found through Ari’s algorithms.

I am glad that Ari is focusing on learning the crucial dynamics of the car business. Pricing is fundamental. I personally think it is a very big leap for an artificial intelligence system to teach itself the all-important things.

She understands the car business so well that she knows what matters and what is secondary.

With such fascinating algorithmic thinking, I am beginning to trust that Ari will make the best choice for me. I don’t worry about buying and selling cars anymore – I have everything at my fingertips thanks to Ari, come what may.

That’s it for this week, do let us know what you think of Ari’s progress in the comment section below.

We will be back with more Ari updates next week.

Ari update: now you can compare different cars side by side

This week our brilliant Ari got even smarter. The ever self-learning wizard surprised us with yet another landmark: she can now give you comparisons between different cars in a heartbeat. All you have to do is say the word.

She is getting more flexible too: when asking for car details, I didn’t have to enter the precise car names. “Compare A4 and A6, please” – and I got a link to a comparison page detailing the two Audi models.

Gone are the days when one had to go through the painful process of filling out a lengthy form in order to get a quick (oxymoronic) comparison.

Hang on, we aren't done yet…

But hold on, Ari is not done giving you the comparison. When she gives you the comparison link she also asks “Want me to add another car to the comparison page?”

Again, you only have to enter the car’s name and you will get another link. Now you can compare three cars side by side in great detail, with virtually every possible piece of information about the cars.

In mere seconds you can get all the comparisons you need.  

From expert ratings and engine specifications to fuel consumption and pricing, it's all there.

But why compare different cars?

Whenever I am buying a car, I like to browse through options. My advice to you: always look through different choices. You should never buy a car without proper research. More often than not, you will find that a particular car that looks great in pictures is lacking specifications that you’d like to have.

In today’s technology-dominated world, it makes no sense to visit a car retailer just to look at different models. You can do it online. Even online, you don’t have to fill out search forms. These are great times to be alive.

How Ari serves you

The point of artificial intelligence is to give you the information you seek in a more personal way. That is step one. Step two is to give you information that you need but haven’t asked for.

There are many things circling your mind that you cannot pen down when doing research. Ari provides you with a set of options, she thinks like a human, she thinks like you.

She will give you recommendations based on your needs. She will identify what else you might be interested in, based on your searching mindset.

Developing AI has only one goal: to eliminate machine-like output and give you personalized recommendations based on the information you provided, along with intelligent guesses.

That’s all for this week, have a great time Ari-ing your way to car searching like never before.

If you have any ideas, suggestions, recommendations or criticism for Ari, do let us know in the comment section below. We’d love to hear from you. It has always been about you and only you.

See you next week with more Ari updates. Adios.

How to automate the Facebook, Google and Fabric SDK setup for different environments in iOS apps

When building mobile apps, it is very common to integrate the Facebook, Google or Fabric SDKs. Managing configurations for these SDKs across different environments can become tedious and error prone. Settings like the app key in Info.plist (or GoogleService-Info.plist) sit outside the scope of the code and cannot be handled programmatically. As a consequence, we often end up managing the files manually and … making mistakes.

At iCarAsia, we use four environments: stack, staging, preprod and production. Every time we build for one of these environments we need to change the app keys and URL scheme for Fabric, Facebook and Google. In order to limit the chances of error, we decided to automate the process.
According to their documentation, the following settings need to be changed each time we build for a different environment:


  1. Setting the FacebookAppID in the Info.plist
  2. Setting the FacebookDisplayName in the Info.plist
  3. Setting the FacebookAppID with the fb prefix as the URL scheme in the Info.plist
  4. Replacing the TRACKING_ID in Google's own GoogleService-Info.plist
  5. Setting the APIKey in the Info.plist under the Fabric key



To automate this process we are using shell scripts, which can easily be integrated into the Xcode build process via Target -> Build Phases -> New Run Script Phase.

Adding a run script phase

To invoke a script, just place a call to the script file in the shell script placeholder. For example, for Facebook it is the following code snippet in our project:

# Setup Facebook Kit API Keys
. ${PROJECT_DIR}/FacebookKeyAutomationScripts/

which looks like this in the Xcode project:

Facebook script


Note: just make sure this script phase comes after the “Compile Sources” and “Copy Bundle Resources” phases, so that the files which need to be modified have already been copied to the target bundle.

Facebook script phase order

Now let’s turn to the actual script which modifies the files. Since the concept is the same for every SDK, we’ll use Facebook for illustration purposes.


To start off, we always create two shell script files:

Facebook script files

  1. Keys file: defines the keys for the different environments
  2. Automation logic file: the script that changes the keys in the build depending on the build environment

1.    Keys file

This file contains all the keys for the different environments. For us, those are staging, stack, preprod and production.

# iOSConsumerApp
# Created by Muhammad Tanveer on 9/17/15.
# Copyright (c) 2015 iCarAsia. All rights reserved.
if [ ${TARGET_NAME} = "iOSConsumerApp" ]; then
the_facebook_production_display_name="Carlist - Production"
the_facebook_preprod_display_name="Carlist - Preprod"
the_facebook_staging_display_name="Carlist - staging"
the_facebook_stack_display_name="Carlist - Stack"
#indonesia app keys....
elif [ ${TARGET_NAME} = "iOSConsumerApp-ID" ]; then
the_facebook_production_display_name="Mobil123 - Production"
the_facebook_preprod_display_name="Mobil123 - Preprod"
the_facebook_staging_display_name="Mobil123 - Staging"
the_facebook_stack_display_name="Mobil123 - Stack"
fi

Another benefit of using this file is that if you have different targets, you can differentiate the keys per target. We have three targets for three countries, and we use this to pick the keys that will be used in the second script.
${TARGET_NAME} is one of the variables available in the build settings which can be used to differentiate between targets in an Xcode project. For the complete list of available build settings, please refer to this Apple page.
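
To make the interaction between ${TARGET_NAME} and ${CONFIGURATION} concrete, here is a minimal, self-contained sketch of the selection logic. The function name and the branch layout are illustrative only – the real scripts use plain variable assignments as shown above – but the target-to-brand and configuration-to-environment mapping is the same idea:

```shell
#!/bin/sh
# Illustrative sketch: pick a Facebook display name from the target
# (country app) and the build configuration (environment).
select_facebook_display_name() {
  # $1 = target name, $2 = build configuration
  case "$1" in
    iOSConsumerApp)    the_app_brand="Carlist" ;;
    iOSConsumerApp-ID) the_app_brand="Mobil123" ;;
    *) echo "Unknown target: $1" >&2; return 1 ;;
  esac
  case "$2" in
    AppStore) echo "$the_app_brand - Production" ;;
    Release)  echo "$the_app_brand - Preprod" ;;
    Debug)    echo "$the_app_brand - Staging" ;;
    *) echo "Unknown configuration: $2" >&2; return 1 ;;
  esac
}

select_facebook_display_name "iOSConsumerApp" "AppStore"   # Carlist - Production
select_facebook_display_name "iOSConsumerApp-ID" "Debug"   # Mobil123 - Staging
```

In the real build phase, Xcode exports both variables automatically, so the scripts can branch on them without any extra setup.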


2.   Automation logic file

The second phase is to use those keys and perform the required changes according to each SDK’s requirements. Here is the script to do that:

# iOSConsumerApp
# Created by Muhammad Tanveer on 9/17/15.
# Copyright (c) 2015 iCarAsia. All rights reserved.
# Path to the Info.plist of the current target
path_to_info_plist_file="${TARGET_BUILD_DIR}/${INFOPLIST_PATH}"
# Import keys and secrets from a file
. ${PROJECT_DIR}/FacebookKeyAutomationScripts/
if [ ${CONFIGURATION} = "AppStore" ]; then
echo "Release Build Configuration - Set Facebook API Key to Production"
the_current_facebook_api_key=`/usr/libexec/PlistBuddy -c "Print :FacebookAppID" "$path_to_info_plist_file"`
echo "Current Facebook API Key from Info.plist: $the_current_facebook_api_key"
echo "Facebook Production API Key: $the_facebook_production_api_key"
if [ "$the_current_facebook_api_key" == "$the_facebook_production_api_key" ]; then
# Keys match - do not change
echo "Facebook API Keys match. Will not update"
else
# Keys do not match - will change
echo "Current Facebook API Key is not the same as new API Key, will change"
/usr/libexec/PlistBuddy -x -c "Set :FacebookAppID $the_facebook_production_api_key" "$path_to_info_plist_file"
/usr/libexec/PlistBuddy -x -c "Set :FacebookDisplayName $the_facebook_production_display_name" "$path_to_info_plist_file"
# Assuming the URL scheme for facebook is the first one in the URL schemes array
/usr/libexec/PlistBuddy -x -c "Set :CFBundleURLTypes:0:CFBundleURLSchemes:0 $the_facebook_production_url_scheme" "$path_to_info_plist_file"
the_updated_facebook_api_key=`/usr/libexec/PlistBuddy -c "Print :FacebookAppID" "$path_to_info_plist_file"`
the_updated_facebook_display_name=`/usr/libexec/PlistBuddy -c "Print :FacebookDisplayName" "$path_to_info_plist_file"`
the_updated_facebook_url_scheme=`/usr/libexec/PlistBuddy -c "Print CFBundleURLTypes:0:CFBundleURLSchemes:0" "$path_to_info_plist_file"`
echo "Facebook API Key set to: $the_updated_facebook_api_key"
echo "Facebook Display Name set to: $the_updated_facebook_display_name"
echo "Facebook URL Scheme set to: $the_updated_facebook_url_scheme"
fi
elif [ ${CONFIGURATION} = "Release" ]; then
echo "AdHoc Build Configuration - Set Facebook API Key to Preprod"
the_current_facebook_api_key=`/usr/libexec/PlistBuddy -c "Print :FacebookAppID" "$path_to_info_plist_file"`
echo "Current Facebook API Key from Info.plist: $the_current_facebook_api_key"
echo "Facebook Preprod API Key: $the_facebook_preprod_api_key"
if [ "$the_current_facebook_api_key" == "$the_facebook_preprod_api_key" ]; then
# Keys match - do not change
echo "Facebook API Keys match. Will not update"
else
# Keys do not match - will change
echo "Current Facebook API Key is not the same as new API Key, will change"
/usr/libexec/PlistBuddy -x -c "Set :FacebookAppID $the_facebook_preprod_api_key" "$path_to_info_plist_file"
/usr/libexec/PlistBuddy -x -c "Set :FacebookDisplayName $the_facebook_preprod_display_name" "$path_to_info_plist_file"
/usr/libexec/PlistBuddy -x -c "Set :CFBundleURLTypes:0:CFBundleURLSchemes:0 $the_facebook_preprod_url_scheme" "$path_to_info_plist_file"
the_updated_facebook_api_key=`/usr/libexec/PlistBuddy -c "Print :FacebookAppID" "$path_to_info_plist_file"`
the_updated_facebook_display_name=`/usr/libexec/PlistBuddy -c "Print :FacebookDisplayName" "$path_to_info_plist_file"`
the_updated_facebook_url_scheme=`/usr/libexec/PlistBuddy -c "Print CFBundleURLTypes:0:CFBundleURLSchemes:0" "$path_to_info_plist_file"`
echo "Facebook API Key set to: $the_updated_facebook_api_key"
echo "Facebook Display Name set to: $the_updated_facebook_display_name"
echo "Facebook URL Scheme set to: $the_updated_facebook_url_scheme"
fi
elif [ ${CONFIGURATION} = "Debug" ]; then
echo "Debug Build Configuration - Set Facebook API Key to Development"
the_current_facebook_api_key=`/usr/libexec/PlistBuddy -c "Print :FacebookAppID" "$path_to_info_plist_file"`
echo "Current Facebook API Key from Info.plist: $the_current_facebook_api_key"
echo "Facebook Development API Key: $the_facebook_staging_api_key"
if [ "$the_current_facebook_api_key" == "$the_facebook_staging_api_key" ]; then
# Keys match - do not change
echo "Facebook API Keys match. Will not update"
else
# Keys do not match - will change
echo "Current Facebook API Key is not the same as new API Key, will change"
/usr/libexec/PlistBuddy -x -c "Set :FacebookAppID $the_facebook_staging_api_key" "$path_to_info_plist_file"
/usr/libexec/PlistBuddy -x -c "Set :FacebookDisplayName $the_facebook_staging_display_name" "$path_to_info_plist_file"
/usr/libexec/PlistBuddy -x -c "Set :CFBundleURLTypes:0:CFBundleURLSchemes:0 $the_facebook_staging_url_scheme" "$path_to_info_plist_file"
the_updated_facebook_api_key=`/usr/libexec/PlistBuddy -c "Print :FacebookAppID" "$path_to_info_plist_file"`
the_updated_facebook_display_name=`/usr/libexec/PlistBuddy -c "Print :FacebookDisplayName" "$path_to_info_plist_file"`
the_updated_facebook_url_scheme=`/usr/libexec/PlistBuddy -c "Print CFBundleURLTypes:0:CFBundleURLSchemes:0" "$path_to_info_plist_file"`
echo "Facebook API Key set to: $the_updated_facebook_api_key"
echo "Facebook Display Name set to: $the_updated_facebook_display_name"
echo "Facebook URL Scheme set to: $the_updated_facebook_url_scheme"
fi
fi

First we define the path to the Info.plist of the target. Then we import the keys file defined in step 1 into this scope so that we can use it.

Next we check whether the current configuration for the project is “AppStore”. If it is indeed the AppStore build, we get the current key using PlistBuddy, a helpful utility available on macOS for reading and modifying plist files.

We then compare the current key with the Facebook production key. If the keys match, there is no need to change anything: the last build used the same configuration, so the keys are already set properly. Otherwise, we update the FacebookAppID, FacebookDisplayName and URL scheme values.
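
The compare-then-set idea is easy to try outside Xcode. In this purely illustrative sketch, a plain key=value file stands in for the Info.plist and sed stands in for PlistBuddy; none of these names come from our actual scripts:

```shell
#!/bin/sh
# Illustration of the compare-then-set pattern: only touch the file
# when the stored value differs from the desired one.
config_file=$(mktemp)
echo "FacebookAppID=1111111111" > "$config_file"

set_key_if_needed() {
  # $1 = file, $2 = key, $3 = desired value
  current=$(sed -n "s/^$2=//p" "$1")
  if [ "$current" = "$3" ]; then
    echo "$2 already set. Will not update"
  else
    echo "$2 differs, will change"
    sed -i.bak "s/^$2=.*/$2=$3/" "$1"
  fi
}

set_key_if_needed "$config_file" "FacebookAppID" "2222222222"   # FacebookAppID differs, will change
set_key_if_needed "$config_file" "FacebookAppID" "2222222222"   # FacebookAppID already set. Will not update
```

The skip branch mirrors the “Facebook API Keys match. Will not update” case in the real script.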


It is important to mention that in our case the Facebook URL scheme is the first element in the URL schemes array. If your URL scheme is at a different position, use that index like “:positionIndex:” instead of “:0:” in the script.

After that, to confirm that the values were changed correctly, we read the latest values back from the plist and print them to the console.

You can confirm the values in the Report Navigator (Cmd+8) in Xcode.

Script Output


The rest of the script repeats the same logic for the other build configurations, just with different keys.

The same kind of scripts can be written for Fabric and Google. For your reference, links to these scripts for our app are added at the bottom.

There is one extra step for Fabric: uploading the dSYM file to Fabric, which can be added as another run script build phase. The script for that is:

# iOSConsumerApp
# Created by Muhammad Tanveer on 9/17/15.
# Copyright (c) 2015 iCarAsia. All rights reserved.
# Import keys and secrets from a file
. ${PROJECT_DIR}/Crashlytics_build_phase_run_scripts/
if [ ${CONFIGURATION} = "AppStore" ]; then
echo "Running Crashlytics for this build"
echo "Will upload to production Organization"
"${PODS_ROOT}/Fabric/Fabric.framework/run" $the_crashlytics_production_api_key $the_crashlytics_production_build_secret
elif [ ${CONFIGURATION} = "Release" ]; then
echo "Running Crashlytics for this build"
echo "Will upload to development Organization"
"${PODS_ROOT}/Fabric/Fabric.framework/run" $the_crashlytics_development_api_key $the_crashlytics_development_build_secret
elif [ ${CONFIGURATION} = "Debug" ]; then
echo "Not Running Crashlytics for this build"
fi

This just invokes the Fabric run command as described in their docs. We don’t run it for development builds, since we don’t want to upload the dSYM file for crashes during development; for QA and production builds it gets uploaded automatically.

The gists of the other scripts can be found at the following links.


Automating the key change process for different environments and countries had a great effect on our workflow. We can now work with confidence, knowing the right keys will be used for each environment. It also saved us significant time on the tedious and error-prone task of configuring SDKs.

We will now work on automating the build distribution process for QA and Apple, so that, for instance, when QA needs a specific build for a specific country, the correct app is built and sent with one command.

Elasticsearch – Part 1 – Why we chose it

Understanding our searches and listings.

At the end of last year, we started working on ways to understand our visitors. One of the things we were interested in was what they searched for and how often they found what they were looking for. On the other side of things, we also wanted to know what happens to the listings our sellers post: how often they show up, and in which search queries. Then there were questions like what the most popular car makes and models are, in which regions, and other patterns we might find. In the end we wanted to correlate all this to understand what is happening on our sites, from the individual listings to the bigger picture. Eventually this will help us improve our systems, and later, integrating this data back into the sites may provide an improved experience for both buyers and sellers.

We started exploring ways to store this data. Getting the data was the easy part, as everything the user searches for comes through our internal API. The challenge was storing the data in a manner that could later be used not only to understand it, but also to integrate it back into our system.

There were two important questions we asked ourselves: where to store the data, and how to make it meaningful.


So how do we store this data?

As we were already using Apache Solr as the search engine for our sites, our first thought was to somehow enable logging in Solr and get those logs into a format we could analyze.


In our search for something that did this, we came upon the ELK (Elasticsearch, Logstash, Kibana) stack, which sounded almost like what we wanted.

Logstash would take the Solr logs and dump them into Elasticsearch, Elasticsearch would allow us to query them, and Kibana would use Elasticsearch to graph them. Elasticsearch is a search server based on Lucene which provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is written in Java and is open source under the Apache License.

We did a trial run of ELK using the Solr logs. It worked, but then came the inflexibility. The logs contained only what the user searched for; they didn’t record where the query was generated or other data we might need. The other problem was how to capture what the visitor saw as a result of the search. That would require post-processing, as we would have to pick up each search query and fill in its results at a later time.

We put ELK aside for the moment and started looking for alternatives. Ruling out RDBMS, we began exploring some of the NoSQL databases and other post-processing technologies. We went through MongoDB, Hadoop, Hive, Cassandra and VoltDB.



One of the solutions we worked out involved Cassandra. Cassandra’s benchmarks were the best, with a high number of writes in less time, and it compresses data on storage, requiring less disk space. It seemed almost exactly the thing we needed.

We first created a basic schema for Cassandra, creating collections, and wrote an API endpoint to write some dummy data into it. Then we ran basic load tests using JMeter while storing the data. The writes were great, and the disk space taken by Cassandra was low. But while implementing this, we felt we were constantly reworking the schema and rethinking what we wanted to store, then changing the implementation. One thing that bothered us was the post-processing we would have to do if we chose Cassandra: since the data would be in raw format, we would have to create usable data, working and changing the schema initially to get to the point where we had the desired result, and then do post-processing on top of that. Since we were only beginning to explore how the data we wanted could be used, we needed something that would require less processing and would bring our data into a format which we could query for aggregations and run analytical queries on.



We then went back to the ELK stack and decided we didn’t need Logstash. We started a similar testing process to the one above, but using standalone Elasticsearch for now. From our experience with Cassandra, we knew the first thing to do was to finalize the fields from the search query we wanted to save, and what data from the search result we wanted to record for each individual search query. All of this is anonymous data, but having it decided early on was a plus.

We did a rough calculation using the numbers from New Relic and Google Analytics and came up with a rough number of requests we were anticipating. We then wrote scripts to populate dummy data into Elasticsearch and measured the size it takes to store documents (each document containing an average number of search parameters and one search result). We now had an estimated data size, but what about the load? We started sending concurrent write requests to the server, initially using JMeter to create them; with limited success, we were able to test it. Unlike Cassandra, which was on our local server, Elasticsearch was deployed on an AWS machine, so we ran into a bandwidth bottleneck while testing. We therefore decided to run the benchmark from another AWS machine on the same network as the Elasticsearch machine. During this time, we moved away from JMeter and started using Apache Benchmark for the concurrent tests. And this is when we decided to go with Elasticsearch: it easily managed the number of writes we estimated, and the data was easy to query. The only concern was disk size. Our initial assessment showed 2 TB of data for 3 months (if we had one search query with one search result), which would be a lot more, since searches normally return around 10 results on average.



Then we came to the question of how to get the data into Elasticsearch. Of course, having the API write directly to Elasticsearch was the easiest solution, but there were three concerns with this.

First, we didn’t want our API endpoints doing extra work and slowing down. Second, in case of some delay in an Elasticsearch write, we didn’t want our API endpoint to slow down with it. Third, in case of an error in Elasticsearch, we didn’t want it affecting the original endpoint, and we also wanted a retry mechanism.

To avoid all this, we decided to let our queue server (Fresque) do the writing to Elasticsearch. Our API would just create a job with the search query and forget about it, without having to do anything else. It is the job’s task to generate the Solr results, process the search query parameters, do any post-processing work, and then save everything into Elasticsearch. This ensures our site functions as before, with the load shifting to the queue server. I’ll discuss Fresque and other load-testing details in Part 2 of this article.

Why Elasticsearch and not Solr?

There is a question that kept nagging in our minds: why didn’t we just go ahead and use Solr? It’s also based on Apache Lucene, has great search features, and we already had experience managing it. Well, the answer lies in what we needed in this case. We chose Elasticsearch because of how it indexes data, the analyzers we can use, and its support for nested and parent-child data, but mainly because of the analytical queries it can perform.

Solr is still geared more towards text search, while Elasticsearch tilts towards filtering and grouping – the analytical query workload – and not just text search. The Elasticsearch team has made efforts to make such queries more efficient (lower memory footprint and CPU usage) in both Lucene and Elasticsearch. Elasticsearch is a better choice for us, as we need not just text search but also complex search-time aggregations.
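
As a hypothetical illustration of such a search-time aggregation (the field names here are invented for this sketch, not our actual mapping), a single query body is enough to count the most searched car makes per region:

```json
{
  "size": 0,
  "aggs": {
    "by_region": {
      "terms": { "field": "region" },
      "aggs": {
        "top_makes": {
          "terms": { "field": "car_make", "size": 10 }
        }
      }
    }
  }
}
```

Posted to a search endpoint, this returns bucketed counts instead of documents, which is exactly the kind of question we wanted the data to answer.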

The way Elasticsearch manages shards and replication is also better than Solr’s, as it is a native feature and offers more control. We didn’t put that into consideration, although it is a good reason in itself.


So in the end, we went with Elasticsearch, compromising on the high data size, for its ability to aggregate data, make the data searchable, and let us perform analytical queries, reducing the effort needed to process the data. Elasticsearch can transform data into searchable tokens with the tokenizer of our choice, perform any transformation on it, and then index the needed fields. It also supports both nested objects and parent-child objects, which is a great way to make sense of complex data. Then there is the wonderful Kibana, which can plot graphs from Elasticsearch and give us instant meaning.


Next up

Elasticsearch – Part 2 – Implementation and what we learned.
Elasticsearch – Part 3 – A few weeks fast forwarded and the way ahead.

iCar Asia Product And Technology Hackathon Day

Winter is coming… We are going to have a Hackathon day… For some reason, both of these sentences meant the same to the fun peeps of the iCar Asia Product and IT team. Maybe because this idea was over-discussed and never actually happened (for the year 2015) – just like the brothers of the Night’s Watch were told too much about the White Walkers. And it stayed just a myth for the brothers until they actually saw the White Walkers raiding Hardhome as the Free Folk boarded ships bound for Castle Black [Game of Thrones, Season 5]. So we, the Product & Technology culture committee members – Faraz, Divya, Yi Fen, Syam, and Salam (myself) – made sure the equivalent of the White Walkers’ raid happened at iCar, so that it couldn’t stay a myth any more. Yes, I’m talking about arranging the Hackathon day for our team.

After the ‘how and when are we going to arrange it’ discussion in the culture committee meeting, this is the email I sent out to the team on 15th April 2015.

Hi Team,

As you all know, we have been discussing the ‘Hackathon‘ for quite some time now – let’s actually do it.

The culture committee, as a team, has agreed on making it happen next ‘Tuesday 21st April 2015’ and Joey – our beloved CIO has approved it too. So get your turbo-creativity charged: you’re gonna need it.

There are a few rules which we are gonna share with you later this week. The basic idea is to start the Hackathon officially on Monday evening at 5 PM: you can form a team, think about the idea, and start working on it on Monday itself (after 5 PM). You can work on your ‘great idea’ until Tuesday 5 PM.

After 5 PM Tuesday, each team (turn by turn) will present whatever they have worked on and then ideas will be ranked based on a preset criteria (Which we’ll share later with you).

Don’t forget, there are prizes too (for the first and the second best teams).

Get ready folks: Winter is coming ;).

Salam Khan

There was mixed feedback about the email. Many thought it was just another promise email and nothing was going to happen; however, the push from the culture committee made them feel that it was real and not just another promise.

Hackathon Guidelines / Rules

To emphasize the idea that this hackathon was real, the very next day I wrote these guidelines, discussed them with the culture committee, and shared them with everybody in the Product and Technology team. Some points were taken as a joke, but once explained, the team agreed to follow them.

Read to enjoy :).

Team Guidelines

  1. Each team must consist of more than 2 members but not more than 5 (follow the Hipster, Hacker, and Hustler approach)
  2. Syam and David cannot be in the same team
  3. Manju and Faraz cannot be in the same team
  4. Arvind and Tanveer cannot be in the same team
  5. Joey and Pedro cannot be a part of any team
  6. No team can consist of more than 2 .NET devs
  7. No team can consist of more than 2 PHP devs
  8. No team can consist of more than 2 QA engineers
  9. Syaiful and Juliana cannot be in the same team
  10. Alain, Geetha, and Celine cannot be in the same team
  11. Sonny and Jackson cannot be in the same team
  12. Albert and Salam cannot be in the same team
  13. Teams can be formed anytime from now until Thursday 5 PM, but the actual work must not start before then
  14. Teams will have 24 hours – From 5 PM Thursday 23rd April 2015 To 5 PM Friday 24th April 2015 to work on their idea
  15. Teams can spend 24 hours in the office if they want to
  16. A team’s output does not have to be working software – it can be a prototype, a piece of software, or even a presentation
  17. No P&T member may be left without a team

Jury and the general rules

  1. The Jury: Joey and Pedro (plus the overall clapping for each team)
  2. Ideas will be rewarded on the basis of:
    1. Innovation and creativity
    2. Impact on society
    3. Market viability
  3. Each team will get 5 to 7 minutes (not less than 5 minutes and not more than 7 minutes)
  4. No drug or creativity-enhancing stuff (other than Redbull and Coffee) can be used throughout the Hackathon


Prizes

  1. First team gets a Raspberry Pi 2 Model B (for each member)
  2. Second team gets iFlix Annual Subscriptions (for each member) for 2015
  3. All teams will get a certificate of Hackathon participation (for each member)

First draft by Salam, approved by the Culture Committee, and Joey.

We told everybody they only had one day to form their teams and that the day after tomorrow (April 22nd) would be the Hackathon day. And this time we asked the culture committee members to make sad or angry faces. We did that. And it actually worked.

Hackathon Teams

Within the next day, these 4 teams were formed.

ATAMS (Pronounced as ATOMS)

  1. Alain
  2. Tanveer
  3. Ashok
  4. Mayur
  5. Salam

HEAVYWEIGHT (Yeah, most of them are heavyweight indeed)

  1. Syaiful
  2. Bob
  3. Manju
  4. Syamsul
  5. David

The Winning Team (It doesn’t mean they won or something ;))

  1. Fahad
  2. Zeeshan
  3. Wei Fong
  4. Juliana
  5. Celine
  6. Jackson

Juz Bananas (Yeah, whatever, they won!)

  1. Faraz
  2. Shahzad
  3. Daniel
  4. Yi Fen
  5. Arvind
  6. Lakshami

Team Projects

It really happened. Every group worked very hard and used their creativity to build something new. The team projects were as follows.

Carlist Desktop Chat project


(Langkawi) Travel Mobile App


CanCan Lunch App


Carlist Desktop – One Stop Shop for buyers and sellers


And the first and the second prizes went to…

All four ideas were really appreciated by the Jury and the audience (business people from other departments). But in the end, the number one idea was the ‘Chat Project for Carlist‘, which won the Jury’s hearts, with the ‘Chat App‘ taking the second prize.

Random clicks

Here you go, some random clicks from the Hack Day.

(Photos: iCar Hackathon Day, 1–7)

In the end, I would like to use this platform to thank everybody (current and former members alike) on the iCar Asia Product team who helped us arrange this amazing Hack Day. As we all believe, “a journey of a thousand miles begins with a single step,” and the first step is always the toughest. I hope more of these Hackathons / Hack Days keep happening at iCar Asia, and that the fun peeps on the Product and Technology team keep innovating.


Migrate Old URLs to New URL Structure Using Nginx and Redis

While maintaining a website, webmasters may decide to move the whole website, or parts of it, to a new location. For example, you might have a URL structure that is neither SEO- nor user-friendly and need to change it. Changing a URL structure can involve a bit of effort, but it’s worth doing properly.

It’s very important to redirect all of your old URL traffic to the new locations using 301 redirects, and to make sure the site can be navigated without running into 404 error pages.

To start with, you will need to generate a list of old URLs and map them to their new destinations. This list can grow quite large depending on the size of your website, and how you store the mapping depends on your servers and the number of URLs. You can use a database, or configure URL rewriting on your server or application for common redirect patterns.

The problem with a database is that it is slow, while a file-based mapping (via nginx) requires reloading or restarting nginx every time you add more redirect rules, and can also consume a significant amount of memory depending on the size of the mapping file.

Nginx + Redis – Migrate Old URLs to New URL Structure

Fortunately, by using Redis and the Nginx Lua module you can make this transition smooth and the overall migration process painless.


1 – Install the packages nginx-extras & redis-server (e.g. apt-get install nginx-extras redis-server)
2 – Install the lua-resty-redis library (bundled with OpenResty)
3 – Configure nginx
+ Add the following line at the start of your nginx configuration file (replace the path with the location where the OpenResty Lua modules are installed).

lua_package_path "/usr/local/openresty/lualib/?.lua;;";

4 – Add the following location block to your nginx configuration file:

location ~ "^/[\d]{4}/[\d]{2}/[\d]{2}/(?<slug>[\w-]+)/?$" {

    content_by_lua '
        local redis = require "resty.redis"
        local red = redis:new()

        red:set_timeout(1000) -- 1 sec
        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end

        local key = ngx.var.slug
        local res, err = red:get(key)

        if not res then
            ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
        end

        if res == ngx.null then
            ngx.exit(ngx.HTTP_NOT_FOUND)
        end

        ngx.redirect(res, 301)
    ';
}

How does it work?

lua_package_path "/usr/local/openresty/lualib/?.lua;;";

This line tells nginx where to find the Lua modules, since we intend to use Lua scripts in the configuration.

location ~ "^/[\d]{4}/[\d]{2}/[\d]{2}/(?<slug>[\w-]+)/?$"

This makes every request matching the old URL pattern fall into this block, with the slug captured into a variable (slug) that the Lua script can read.
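To see what that regex actually matches, here is a small Python sketch using an equivalent pattern (PCRE’s `(?<slug>…)` named group is written `(?P<slug>…)` in Python’s `re` module); the sample paths are illustrative, not from the original config:

```python
import re

# Python equivalent of the nginx location regex above.
OLD_URL = re.compile(r"^/[\d]{4}/[\d]{2}/[\d]{2}/(?P<slug>[\w-]+)/?$")

def extract_slug(path):
    """Return the captured slug for an old-style URL, or None if no match."""
    m = OLD_URL.match(path)
    return m.group("slug") if m else None

print(extract_slug("/2015/04/23/my-old-post/"))  # my-old-post
print(extract_slug("/about/"))                   # None
```

Paths like /2015/04/23/my-old-post/ fall into the block with slug set to my-old-post, while anything else (such as /about/) is served normally.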

local redis = require "resty.redis"
local red = redis:new()

red:set_timeout(1000) -- 1 sec
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

The Lua script above tries to connect to the Redis server (on host 127.0.0.1 and port 6379) with a 1-second timeout.

local key = ngx.var.slug
local res, err = red:get(key)

The Lua script above fetches the key from Redis (using the slug variable we captured with the regex).

The rest is quite self-explanatory: it redirects with a 301 if the key is found, or returns a 404 if not.
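To make that control flow concrete, here is a minimal Python model of the same logic, with a plain dict standing in for Redis (the function and variable names are illustrative, not part of the nginx config):

```python
# Model of the Lua handler's control flow: a dict stands in for Redis.
# Returns (status, location): (301, new_url) when the slug is known,
# (404, None) when it is not.

def resolve(slug, store):
    res = store.get(slug)  # red:get(key) in the Lua script
    if res is None:        # res == ngx.null -> slug not in Redis
        return (404, None)
    return (301, res)      # ngx.redirect(res, 301)

store = {"my-old-post": "https://example.com/blog/my-old-post"}
print(resolve("my-old-post", store))  # (301, 'https://example.com/blog/my-old-post')
print(resolve("missing", store))      # (404, None)
```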

+ NOTE: In the example above, adjust the regex and the Redis server host & port to your needs:
a. The regex above is for the URL pattern /{year}/{month}/{day}/{slug}
b. The Redis server host (127.0.0.1) and port (6379)
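The mapping itself still has to be loaded into Redis. One hedged way to do that (the CSV-free dict and slug-keyed scheme here are assumptions for illustration) is to generate plain `SET slug new-url` commands that can be piped into redis-cli:

```python
# Sketch: turn an old-slug -> new-URL mapping into redis-cli SET commands.
# Keys are the slugs captured by the nginx regex; values are the new
# destination URLs. The example mapping below is illustrative.

def to_redis_commands(mapping):
    """Emit one 'SET <slug> <new-url>' line per entry, sorted for stable output."""
    return ["SET {} {}".format(slug, url) for slug, url in sorted(mapping.items())]

mapping = {
    "my-old-post": "https://example.com/blog/my-old-post",
    "hello-world": "https://example.com/blog/hello-world",
}

for line in to_redis_commands(mapping):
    print(line)
```

The printed lines can then be fed to the server, e.g. saved to a file and piped through redis-cli.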

Know another, or perhaps a better, way to migrate old URLs to a new URL structure? Or have you used the same method for your website’s URL migration? Share your experience with us in the comments. We are always happy to hear from you.