10 6 / 2016

This applies to pre-1.0.0 versions of active_admin, which used the meta_search gem for filters. Below is how you can add a custom field as a filter and use it to generate search results with a custom search function:
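The snippet that originally accompanied this post is missing, so here is a minimal sketch of the approach with hypothetical model names (Article, Author): define a named scope that performs the custom search, expose it to meta_search via search_methods, and then reference it as a filter in the ActiveAdmin registration.

```ruby
# app/models/article.rb -- hypothetical models throughout
class Article < ActiveRecord::Base
  belongs_to :author

  # A named scope that performs the custom search
  scope :author_name_contains, lambda { |name|
    joins(:author).where("authors.name LIKE ?", "%#{name}%")
  }

  # Expose the scope to meta_search so ActiveAdmin can use it as a filter
  search_methods :author_name_contains
end

# app/admin/articles.rb
ActiveAdmin.register Article do
  # Rendered as a text input; the value is routed through meta_search
  # to the author_name_contains scope above
  filter :author_name_contains, :as => :string, :label => "Author name"
end
```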

For me, active_admin broke down when we added a polymorphic association to an existing model, and the filter on the association stopped working. We suddenly started seeing "no implicit conversion of nil into String" exceptions. I then had to figure out how to write a custom filter for active_admin - and believe me, it was like walking in the Sahara Desert =P , with little to no documentation around.

Do let me know in case this helped you and I saved you from getting lost in the great Sahara ;)


15 10 / 2014

It all started with looking at a text-only loader for a Linux package on the terminal, and ended in a heartwarming video about a rescue operation, written purely in ruby! No, this isn’t clickbait, but you need to execute (see) it to believe it ;)

Below are some cool Ruby one-liners: “text only loader/processing/progress indicators” && “some FUN”
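The original one-liners are no longer embedded here, so as a stand-in, here are two common sketches of the pattern: a spinner that animates on a single line by printing a carriage return before each frame, and a ten-segment progress bar.

```ruby
# A one-line spinner: cycles through | / - \ on a single terminal line
spinner = %w[| / - \\].cycle
20.times { print "\rProcessing #{spinner.next}"; sleep 0.02 }
puts "\rProcessing done"

# A one-line progress bar: \r rewinds to column 0, so each step overdraws the last
(0..100).step(10) { |p| print "\r[#{'#' * (p / 10)}#{' ' * (10 - p / 10)}] #{p}%"; sleep 0.02 }
puts
```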

Share yours in the comment section! Do you have a better story to tell?

Posted by @amitsxena


14 10 / 2013

Writing this down as I always forget it, and this search query is not very google friendly either (I usually waste a lot of time looking for it, and probably so do others). This is how you get the creation timestamp from a Mongo BSON id:

sample_object.id.generation_time

Now the bonus stuff! If you want to do range queries for the created_at timestamp on the BSON id, this is how you do it:
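The original query is missing here, so below is a sketch. The first 4 bytes (8 hex characters) of an ObjectId encode the creation time as seconds since the epoch - that is what generation_time reads. The pure-Ruby helpers below illustrate the encoding; the commented query at the end shows how the synthetic boundary ids would be used (SampleObject is a hypothetical Mongoid model).

```ruby
# What BSON::ObjectId#generation_time does under the hood:
# decode the first 8 hex chars as a unix timestamp
def generation_time(object_id_hex)
  Time.at(object_id_hex[0, 8].to_i(16)).utc
end

# Build a synthetic ObjectId string from a Time, usable as a range boundary
# (the bson gem offers BSON::ObjectId.from_time for this)
def object_id_from_time(time)
  format("%08x", time.to_i) + "0" * 16
end

# Range query sketch on _id (hypothetical Mongoid model):
#   from = BSON::ObjectId.from_time(Time.utc(2013, 10, 1))
#   to   = BSON::ObjectId.from_time(Time.utc(2013, 10, 14))
#   SampleObject.where(:_id.gte => from, :_id.lt => to)
```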


21 9 / 2013

Wiser people have said - if you have a problem, break it down! If the problem is bigger, break it down further. This is true both philosophically and technologically.

I’ll describe a technological incident here in the hope that it could help someone else as well (when they land in this tricky situation and are looking for a quick resolution). We use the delayed_job gem in our web apps for asynchronous processing. It had performed pretty well for us all this while, but one fine day it caved in. We had a barrage of jobs into the delayed_jobs table, and the job-processing daemons were unbearably slow. The reason: there were 600K jobs in the queue, and the mysql queries the delayed_job daemons fire every 5 seconds were too much I/O for the database server. Soon the slow query logs were flooded with update queries. I tried building a few indexes to speed up some of those queries, but the improvements weren’t significant. What now? I filed a bug on the github repo in the hope of greater good for future users, but that wasn’t sufficient. Those jobs in the queue were time sensitive in nature, and at the current speed would take days, if not weeks, to clear out. We needed a quick fix now, and a long term fix (a better solution than delayed_job) later.

Solution (quick fix): I went back in time and realized that delayed_job had performed well with a few thousand jobs in the table. That was it - we needed to pause and break it down a bit - in this case, the mysql table. So we copied all the data from the delayed_jobs table to a new table (backlog_jobs) and cleared the original table (the delayed_jobs table from which the jobs are picked). Then we copied the first 5000 jobs from backlog_jobs to delayed_jobs and deleted them from backlog_jobs. They were processed pretty quickly, in a matter of minutes. Voila! This was it. Then we wrote a small cron that runs every minute and checks the count of the delayed_jobs table. If there are fewer than 100 rows, it repeats the process, i.e. copies the next batch of 5000 jobs and deletes it from backlog_jobs. This way I got back to the older performance levels. Below are some handy queries that you will find useful:

# To create a copy of the table
CREATE TABLE backlog_jobs LIKE delayed_jobs;
INSERT INTO backlog_jobs (SELECT * FROM delayed_jobs);
DELETE FROM delayed_jobs;

# To copy jobs in batches
INSERT INTO delayed_jobs (SELECT * FROM backlog_jobs ORDER BY id ASC LIMIT 5000);
DELETE FROM backlog_jobs ORDER BY id ASC LIMIT 5000;
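The refill cron described above could look something like this - a hypothetical sketch using the mysql2 gem (connection details are made up; the threshold and batch size match the numbers in the post). Note the INSERT/DELETE pair is not atomic, so keep only a single instance of this script scheduled.

```ruby
#!/usr/bin/env ruby
# Hypothetical refill script, scheduled from cron every minute, e.g.:
#   * * * * * /usr/bin/env ruby /path/to/refill_delayed_jobs.rb
require 'mysql2' # assumes the mysql2 gem and your own credentials

db = Mysql2::Client.new(:host => "localhost", :username => "app",
                        :password => "secret", :database => "app_production")

count = db.query("SELECT COUNT(*) AS c FROM delayed_jobs").first["c"]

# Only top up once the workers have nearly drained the current batch
if count < 100
  db.query("INSERT INTO delayed_jobs (SELECT * FROM backlog_jobs ORDER BY id ASC LIMIT 5000)")
  db.query("DELETE FROM backlog_jobs ORDER BY id ASC LIMIT 5000")
end
```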

So the next time you face database-related issues, break it down a bit. That’s what database partitioning is all about, and that is what everything finally comes down to ;)

Would like to hear what I may have done wrong to start with, and what I could have done better. Don’t hesitate to teach me a lesson (pun intended)… :P Philosophically speaking, life is an eternal learning process ;)


08 10 / 2012

This isn’t about the material riches obtained over time, but about the shared wealth among the Ruby community ;)

In case you notice that suddenly all your gems have vanished from your system, and “gem list” shows none of the gems you had installed previously, the details below can help you. In retrospect, I remembered that this was the result of a gem update --system that I had executed some time back. It updated the rubygems version, as well as the ruby version used by rubygems, and it was a clean slate (I don’t know why they mention that command in documentation before installing a gem. It can ruin your day!).

So effectively, I had two executables - gem1.8 and gem1.9.1 - in my bin directory. The update had made a soft link: /usr/bin/gem -> /usr/bin/gem1.9.1. You can check which executable is being used with which gem, and then ls -al <gem path> to find its target.

$ gem env

RubyGems Environment:
  - RUBYGEMS VERSION: 1.3.7
  - RUBY VERSION: 1.9.2 (2011-07-09 patchlevel 290) [i686-linux]
  - INSTALLATION DIRECTORY: /var/lib/gems/1.9.1
  - RUBY EXECUTABLE: /usr/bin/ruby1.9.1
  - EXECUTABLE DIRECTORY: /usr/local/bin
  - RUBYGEMS PLATFORMS:
    - ruby
    - x86-linux
  - GEM PATHS:
     - /var/lib/gems/1.9.1
     - /home/amit/.gem/ruby/1.9.1
  - GEM CONFIGURATION:
     - :update_sources => true
     - :verbose => true
     - :benchmark => false
     - :backtrace => false
     - :bulk_threshold => 1000
  - REMOTE SOURCES:
     - http://rubygems.org/
$ gem1.8 env

RubyGems Environment:
  - RUBYGEMS VERSION: 1.8.15
  - RUBY VERSION: 1.8.7 (2011-06-30 patchlevel 352) [i686-linux]
  - INSTALLATION DIRECTORY: /usr/lib/ruby/gems/1.8
  - RUBY EXECUTABLE: /usr/bin/ruby1.8
  - EXECUTABLE DIRECTORY: /usr/bin
  - RUBYGEMS PLATFORMS:
    - ruby
    - x86-linux
  - GEM PATHS:
     - /usr/lib/ruby/gems/1.8
     - /home/amit/.gem/ruby/1.8
  - GEM CONFIGURATION:
     - :update_sources => true
     - :verbose => true
     - :benchmark => false
     - :backtrace => false
     - :bulk_threshold => 1000
  - REMOTE SOURCES:
     - http://rubygems.org/

The Fix:

Create a soft link for the gem1.8 executable, and everything will be as it was:

rm /usr/bin/gem

ln -s /usr/bin/gem1.8 /usr/bin/gem

Back to the good old days! I’ll save the rubygems update for some other day, and I dread gem update --system from now on…. :O :O :O


20 10 / 2011

I keep forgetting this again and again, spend too much time searching for it on google with no success, and finally look at the source code. Then I wonder why I didn’t look at the source code in the first place, and why I rely on google most of the time to solve the problem……what if there was no GOOGLE….. ;)

So here it is…a step in that direction (self-help, and maybe help for others whose first instinct is to search on google), for the record:

Koala::Facebook::OAuth.new(app_id, app_secret, callback_url).url_for_oauth_code(:permissions => "email,publish_stream")

You need to pass the scope as an argument to url_for_oauth_code method.

And for anyone who wants to see how easy it would have been if my first instinct had been to look at the source, here goes the source code:
