30 11 / 2017

I am trying to add authentication to my API. I have been reading a lot about JWT and how popular it has become, but somehow I haven’t been able to get my head around a few things. I have some fundamental questions about JWT: why is it considered superior, and why is it the de facto standard for API authentication these days? I’d be really grateful to anyone who can help me with these questions - please add your comments below!

Below are the possible approaches:

Using API token per user:

  1. There is an endpoint (https) where you can send email and password to generate an API token for the given user. The token scopes all subsequent API calls to the given user.
  2. The token obtained in (1) is used in all API calls for the given user and is stored locally on the client.
  3. If the password changes (or if I want to invalidate the token on some other event) I reset the token, and the user needs to log in again and regenerate an API token. I can also associate an expiry with it if I need to.
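For concreteness, the three steps above can be sketched with an in-memory store (a minimal sketch - the TokenStore class and its names are mine, and a real implementation would persist tokens in the DB and hash them):

```ruby
require "securerandom"

class TokenStore
  def initialize
    @tokens = {} # token => user_id
  end

  # Step 1: exchange credentials for an opaque token (auth check elided).
  def issue(user_id)
    token = SecureRandom.hex(32)
    @tokens[token] = user_id
    token
  end

  # Step 2: every API call resolves the token back to a user (a DB hit).
  def authenticate(token)
    @tokens[token]
  end

  # Step 3: invalidate on password change or any other event.
  def revoke(token)
    @tokens.delete(token)
  end
end
```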

Using JWT:

  1. There is an endpoint (https) where you can send email and password to generate a JWT token with user info in it and an associated expiry.
  2. The JWT obtained in (1) can be used for all subsequent API calls, but since forcing the user to log in again whenever the JWT expires is bad UX, and I cannot directly invalidate a JWT in a contingency, I’ll need to use refresh tokens as well.
  3. Now if the token has expired, generate a new JWT using the refresh token. I see this as an extra API call which adds latency and overhead. The new token is then used in subsequent calls.
  4. On password change reset the refresh token to force login.
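A minimal sketch of issuing and verifying the JWT from steps (1)-(3), using only the Ruby standard library (HS256 signing done by hand just to show the mechanics - in a real app you’d use a library such as the jwt gem, and the secret and claim values here are placeholders):

```ruby
require "openssl"
require "json"
require "base64"

SECRET = "change-me" # placeholder signing key

def b64(data)
  Base64.urlsafe_encode64(data, padding: false)
end

# Step 1: issue a JWT carrying the user info and an associated expiry.
def issue_jwt(user_id, ttl = 900)
  header  = b64({ alg: "HS256", typ: "JWT" }.to_json)
  payload = b64({ sub: user_id, exp: Time.now.to_i + ttl }.to_json)
  signature = b64(OpenSSL::HMAC.digest("SHA256", SECRET, "#{header}.#{payload}"))
  "#{header}.#{payload}.#{signature}"
end

# Step 2: verify signature and expiry locally -- no DB hit needed.
def verify_jwt(jwt)
  header, payload, signature = jwt.split(".")
  expected = b64(OpenSSL::HMAC.digest("SHA256", SECRET, "#{header}.#{payload}"))
  return nil unless OpenSSL.secure_compare(signature, expected)
  claims = JSON.parse(Base64.urlsafe_decode64(payload))
  claims["exp"] > Time.now.to_i ? claims : nil
end
```

Step (3), the refresh flow, is where the DB comes back in: the refresh token has to be stored server-side so it can be revoked.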

Questions:

  1. How is JWT superior to the API token approach?
  2. Both approaches hit the DB (JWT via refresh tokens).
  3. JWT has the additional overhead of maintaining refresh tokens, and, more importantly, an extra call to obtain a new JWT via the refresh token on each expiry. In fact some approaches suggest a token refresh call before each API request to extend the expiry of the JWT.
  4. Invalidating a JWT is tricky, hence the refresh tokens. Isn’t that doing the same thing (a DB hit) in a much more complicated manner?
  5. The JWT can grow in size, and we need to send it with every request. So the idea of avoiding a DB hit is somewhat self-defeating (a catch-22) - we cannot have everything in the payload, so we will need to query the DB anyway.
  6. Some people might say that JWT is not suited to my use case, but I would like to know which use cases it is suited for. In particular, which mechanism is better for authenticating APIs where there is a concept of a user who needs to log in?
Comments

02 9 / 2017

Given the minimalist Node philosophy this doesn’t come with the package, and given my Rails background it’s hard for me to imagine a world without migrations. In case you feel the same way, below is a quick-start guide to get you up and running with migrations in a Node app within two minutes:


10 6 / 2016

This will work with pre-1.0.0 versions, where active_admin used the meta_search gem for filters. Below is how you can add a custom field as a filter and use it to generate search results with a custom search function:
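The code that was embedded here no longer renders, so here is a rough reconstruction of the approach (the model, association, and method names are hypothetical): meta_search’s search_methods whitelists a custom class method on the model, which ActiveAdmin’s filter DSL can then reference:

```ruby
# In the model (hypothetical names): expose a custom search method to
# meta_search so it can be used as an ActiveAdmin filter.
class Product < ActiveRecord::Base
  belongs_to :owner

  search_methods :owner_name_contains

  def self.owner_name_contains(query)
    joins(:owner).where("owners.name LIKE ?", "%#{query}%")
  end
end

# In app/admin/products.rb: reference the search method as a filter.
ActiveAdmin.register Product do
  filter :owner_name_contains, :as => :string, :label => "Owner name"
end
```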

For me, active_admin broke down when we added a polymorphic association to an existing model and the filter on the association stopped working. We suddenly started seeing “no implicit conversion of nil into String” exceptions. I then had to figure out how to write a custom filter for active_admin - and believe me, it was like walking in the Sahara Desert =P , with little to no documentation around.

Do let me know in case this helped you and I saved you from getting lost in the great Sahara ;)


15 10 / 2014

It all started with looking at a text-only loader for a Linux package on the terminal, and ended in a heart-warming video about a rescue operation, written purely in Ruby! No, this isn’t clickbait, but you need to execute (see) it to believe it ;)

Below are some cool Ruby one-liners: “text-only loader/processing/progress indicators” && “some FUN”
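Since the embedded gist doesn’t render in this copy, here is one such progress indicator, sketched out (pure stdlib; the frame characters are just one choice):

```ruby
# A text-only spinner: cycle through frame characters, redrawing in place
# with a carriage return instead of printing a new line each time.
def spinner_frames(n, chars = %w[| / - \\])
  (0...n).map { |i| chars[i % chars.size] }
end

spinner_frames(20).each do |frame|
  print "\r#{frame} processing..."
  $stdout.flush
  sleep 0.05
end
puts "\rdone!            "
```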

Share yours in the comment section! Do you have a better story to tell?

Posted by @amitsxena


14 10 / 2013

Writing this down as I always forget it, and this search query is not very Google-friendly either (I usually waste a lot of time looking for it, and probably so do others). This is how you do it:

sample_object.id.generation_time

Now the bonus stuff! If you want to do range queries on the created_at timestamp using the BSON id, this is how you do it:
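The bonus snippet is missing from this archived copy, but the underlying trick can be sketched in plain Ruby (helper names are mine): the first 4 bytes of a BSON ObjectId are the creation time in big-endian seconds since the epoch, so a zero-padded id built from a timestamp works as a range boundary on _id (the bson gem exposes this as BSON::ObjectId.from_time):

```ruby
# What generation_time reads: the leading 8 hex chars of an ObjectId are
# the creation timestamp in seconds since the epoch.
def object_id_generation_time(hex_id)
  Time.at(hex_id[0, 8].to_i(16)).utc
end

# A boundary ObjectId for range queries on _id: timestamp prefix, rest zeroed.
def object_id_bound(time)
  format("%08x", time.to_i) + "0" * 16
end

# With Mongoid, such bounds can drive a created_at-style range query, e.g.:
#   Sample.where(:_id.gte => BSON::ObjectId.from_time(from),
#                :_id.lt  => BSON::ObjectId.from_time(to))
```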


21 9 / 2013

Wiser people have said - if you have a problem, break it down! If the problem is bigger, break it down further. This is true both philosophically and technologically.

I’ll describe a technological incident here in the hope that it could help someone else as well (when they land in this tricky situation and are looking for a quick resolution). We use the delayed_job gem in our web apps for asynchronous processing. It had performed pretty well for us all this while, but one fine day it caved in. We had a barrage of jobs into the delayed_jobs table, and the job processing daemons were just unbearably slow. The reason was that there were 600K jobs in the queue, and the rate at which the delayed_job daemons fire MySQL queries every 5 seconds was too much I/O for the database server. Soon the slow query logs were flooded with update queries. I tried building a few indexes to speed up some of those queries, but the improvements weren’t significant enough. What now? I filed a bug on the GitHub repo in the hope of a greater good for future users, but that wasn’t sufficient. The jobs in the queue were time-sensitive in nature, and at the current speed would take days, if not weeks, to clear out. We needed a quick fix now, and a long-term fix (a better solution than delayed_job) later.

Solution (quick fix): I went back in time and realized that delayed_job had performed well with a few thousand jobs in the table. That was it - we needed to pause and break it down a bit - in this case, the MySQL table. So we copied all the data from the delayed_jobs table to a new table (backlog_jobs) and cleared the original table (the delayed_jobs table from which the jobs are picked). Then we copied the first 5000 jobs from backlog_jobs to delayed_jobs and deleted them from backlog_jobs. They were processed pretty quickly, in a matter of minutes. Voilà! This was it. Then we wrote a small cron that runs every minute and checks the row count of the delayed_jobs table. If there are fewer than 100 rows, it repeats the process, i.e. copies the next batch of 5000 jobs and deletes them from backlog_jobs. This way we got back to the older performance levels. Below are some handy queries that you will find useful:

# To create a copy of the table
CREATE TABLE backlog_jobs LIKE delayed_jobs;
INSERT INTO backlog_jobs (SELECT * FROM delayed_jobs);
DELETE FROM delayed_jobs;

# To copy jobs in batches
INSERT INTO delayed_jobs (SELECT * FROM backlog_jobs ORDER BY id ASC LIMIT 5000);
DELETE FROM backlog_jobs ORDER BY id ASC LIMIT 5000;
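The one-minute cron described above can be sketched like this (a minimal sketch: `db` stands for any client responding to `#query`, such as a Mysql2::Client, and the thresholds mirror the ones in the post):

```ruby
BATCH_SIZE    = 5_000
LOW_WATERMARK = 100

# The two batch-move statements from above, as strings.
def refill_sql(batch_size = BATCH_SIZE)
  ["INSERT INTO delayed_jobs (SELECT * FROM backlog_jobs ORDER BY id ASC LIMIT #{batch_size});",
   "DELETE FROM backlog_jobs ORDER BY id ASC LIMIT #{batch_size};"]
end

# Run from cron every minute: top up delayed_jobs once it runs low.
def refill_if_low(db)
  count = db.query("SELECT COUNT(*) AS c FROM delayed_jobs").first["c"]
  return false unless count < LOW_WATERMARK
  refill_sql.each { |sql| db.query(sql) }
  true
end
```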

So the next time you face database related issues, break it down a bit. That’s what database partitioning is all about, and that is what everything finally comes down to ;)

Would like to hear what I may have done wrong to start with, and what I could have done better. Don’t hesitate to teach me a lesson (pun intended)… :P Philosophically speaking, life is an eternal learning process ;)


14 12 / 2012

Problem: We wanted to add a custom validation message, and to call the same MySQL table column by different names depending on the value of some other field. After much googling I couldn’t find any suitable answers. Posting the solution I came up with, so that it can be of help to others:

Solution: 

I couldn’t find a solution using procs, interpolation, etc., so I wrote a custom validator, since all the columns are available there. Below is the sample code:
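The embedded sample is gone from this copy; here is a plain-Ruby sketch of the idea (the labels and linktype values are hypothetical - in the app this logic lived in a custom ActiveModel validator, where all the record’s columns are accessible):

```ruby
class Link
  # Human-readable names for the url column, keyed by linktype (hypothetical).
  LABELS = Hash.new("URL").merge(
    "facebook" => "Facebook URL",
    "video"    => "Video link"
  )

  attr_reader :linktype, :url, :errors

  def initialize(linktype:, url:)
    @linktype, @url, @errors = linktype, url, []
  end

  # Mirrors a validator's validate_each: the column name used in the error
  # message for `url` depends on the value of `linktype`.
  def validate
    errors << "#{LABELS[linktype]} can't be blank" if url.nil? || url.empty?
    errors.empty?
  end
end
```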

Please note that linktype and url are columns in the links table.

Also answered a related question on Stack Overflow here:

http://stackoverflow.com/questions/5085284/how-can-i-interpolate-ivars-into-rails-i18n-strings-during-validation/13881719#13881719


18 4 / 2012

Add the following to initializers/active_admin.rb:

config.default_per_page = 50

Posting it here because I couldn’t find this information in documentation, or elsewhere.


20 10 / 2011

I keep forgetting this again and again, spend too much time searching for it on Google with no success, and finally look at the source code. Then I wonder why I didn’t look at the source code in the first place, and why I rely on Google most of the time to solve the problem… what if there was no GOOGLE… ;)

So here it is… a step in that direction (self-help, and maybe help for others whose first instinct is to search on Google), for the record:

Koala::Facebook::OAuth.new(app_id, app_secret, callback_url).url_for_oauth_code(:permissions => "email,publish_stream")

You need to pass the scope as an argument to url_for_oauth_code method.

And for anyone who wants to see how easy it would have been had my first instinct been to look at the source, here goes the source code:
