Rails/Devise authenticating using AWS/Cognito Identity

Background

For a while now, I have been developing a sort of IoT controller with Rails 4. Until now, Devise was used to authenticate users locally with its :database_authenticatable module.

Things changed recently, and I had to move some features of this IoT controller toward AWS. Authentication against AWS/Cognito Identity is one part of the project.
“But why?” you might wonder. Because we need to extend authentication to other products, using a common user database. Cognito fits the description and helps boost development on the AWS ecosystem. In other words, we want to be able to use our IoT controller alongside other new products without depending exclusively on the authentication part of the IoT controller.
This means that the local user database will still be used for application-related data like permissions, sessions, extended profile attributes or database relationships, but pure authentication happens exclusively on Cognito. I moved just the first A of AAA to AWS. This is a disputable choice; let’s call it teamwork.

So, the purpose of this post is to synthesize the procedure required in order to:

  • authenticate against Cognito
  • handle password manipulation and account validation
  • migrate user data with a Lambda triggered by Cognito
  • create new local users if they don’t already exist

The following example was built on Rails 4, but it should adapt to recent Rails versions with minor changes (e.g. before_filter becomes before_action).

Dependencies

Obviously Rails and Devise are required, in addition to the following gem:

gem 'aws-sdk', '~> 3'
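
If you prefer not to pull in the entire SDK, the modular gem should also work here (assumption: only the Cognito Identity Provider API is used in this post):

gem 'aws-sdk-cognitoidentityprovider', '~> 1'

In that case, require 'aws-sdk-cognitoidentityprovider' instead of require 'aws-sdk' in the snippets below.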

Rails 4/Devise adaptation

Environment variables will carry all the AWS-specific access information to Rails. Depending on the setup, a /.env file probably already exists, in which we can add the two following Cognito pool settings (TL;DR: create a new App client without an App client secret; note that the USER_PASSWORD_AUTH flow used below must be enabled on that client):

AWS_COGNITO_USER_POOL_ID=eu-west-1_234567
AWS_COGNITO_CLIENT_ID=01234567890asdfghjkllqwertzuiop1234567890

where the values match the AWS setup.
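
The SDK also needs a region and credentials. If your deployment environment doesn’t already provide them (IAM role, shared credentials file, etc.), the standard AWS SDK environment variables can go in the same file; the values here are placeholders:

AWS_REGION=eu-west-1
AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY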

We’ll need a new Devise strategy; a safe location for this file is /config/initializers/cognito_authenticatable.rb :

require 'aws-sdk'
require 'devise/strategies/authenticatable'

module Devise
  module Strategies
    class CognitoAuthenticatable < Authenticatable
      def authenticate!
        if params[:user]

          client = Aws::CognitoIdentityProvider::Client.new

          begin

            resp = client.initiate_auth({
              client_id: ENV["AWS_COGNITO_CLIENT_ID"],
              auth_flow: "USER_PASSWORD_AUTH",
              auth_parameters: {
                "USERNAME" => email,
                "PASSWORD" => password
              }
            })

            if resp
              user = User.find_by(email: email)
              if user
                success!(user)
              else
                user = User.create(email: email, password: password, password_confirmation: password)
                if user.valid?
                  success!(user)
                else
                  return fail(:failed_to_create_user)
                end
              end
            else
              return fail(:unknown_cognito_response)
            end

          rescue Aws::CognitoIdentityProvider::Errors::NotAuthorizedException => e

            return fail(:invalid_login)

          rescue

            return fail(:unknown_cognito_error)

          end

        end
      end

      def email
        params[:user][:email]
      end

      def password
        params[:user][:password]
      end

    end
  end
end

This strategy creates a new local user when Cognito authenticates the credentials but the user doesn’t exist locally yet.

Then we can configure Devise to use the strategy by giving it the class name in /config/initializers/devise.rb :

Devise.setup do |config|

  # (...)

  config.warden do |manager|
    manager.strategies.add(:cognito, Devise::Strategies::CognitoAuthenticatable)
    manager.default_strategies(:scope => :user).unshift :cognito
  end

  # (...)

end

Now Devise should authenticate against the Cognito user database.

The next step is to allow password resets using the password recovery system of AWS Cognito, with a custom passwords controller:

class Users::PasswordsController < Devise::PasswordsController

  skip_before_action :assert_reset_token_passed

  def create

    raise ArgumentError, "Unexpected block given for requested action: #{params.inspect}" if block_given?

    begin

      client = Aws::CognitoIdentityProvider::Client.new
      resp = client.forgot_password({
        client_id: ENV["AWS_COGNITO_CLIENT_ID"],
        username: params[:user][:email]
      })

      session[:reset_password_email] = params[:user][:email]

      redirect_to edit_user_password_path

    rescue

      flash[:alert] = I18n.t("devise.errors.unknown_error")
      redirect_to new_user_password_path

    end

  end

  def edit

    gon.flash_notice = I18n.t("devise.notices.change_password_email")
    super

  end

  def update

    if params[:user][:password].blank?

      flash[:alert] = I18n.t("activerecord.errors.models.user.attributes.password.blank")
      redirect_to edit_user_password_path(reset_password_token: params[:user][:reset_password_token])

    elsif params[:user][:password] != params[:user][:password_confirmation]

      flash[:alert] = I18n.t("activerecord.errors.models.user.attributes.password.mismatch")
      redirect_to edit_user_password_path(reset_password_token: params[:user][:reset_password_token])

    elsif params[:user][:reset_password_token].blank?

      flash[:alert] = I18n.t("devise.errors.verification_code_missing")
      redirect_to edit_user_password_path(reset_password_token: params[:user][:reset_password_token])

    elsif session[:reset_password_email].nil?

      flash[:alert] = I18n.t("devise.errors.verification_code_expired")
      redirect_to new_user_password_path

    else

      begin

        client = Aws::CognitoIdentityProvider::Client.new
        resp = client.confirm_forgot_password({
          client_id: ENV["AWS_COGNITO_CLIENT_ID"],
          confirmation_code: params[:user][:reset_password_token],
          username: session[:reset_password_email],
          password: params[:user][:password]
        })

        session.delete :reset_password_email

        redirect_to unauthenticated_root_path, notice: I18n.t("devise.notices.password_changed")

      rescue Aws::CognitoIdentityProvider::Errors::InvalidPasswordException => e

        flash[:alert] = e.to_s
        redirect_to edit_user_password_path(reset_password_token: params[:user][:reset_password_token])

      rescue

        flash[:alert] = I18n.t("devise.errors.unknown_error")
        redirect_to edit_user_password_path(reset_password_token: params[:user][:reset_password_token])

      end

    end

  end

end
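
Devise also has to be pointed at this controller. Assuming the usual devise_for declaration, the following adjustment in config/routes.rb should be enough:

devise_for :users, controllers: { passwords: 'users/passwords' }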

In this case, the sign-up procedure is still handled by Rails on the local database (see below); therefore account confirmation (with Devise’s :confirmable module) is handled by Devise without any changes.
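
For reference, the User model could look like the following sketch; the exact list of Devise modules is an assumption and will depend on your application:

class User < ActiveRecord::Base
  # :database_authenticatable keeps a local password (the strategy above
  # stores one at user creation), and the Cognito strategy runs first anyway.
  devise :database_authenticatable, :registerable, :recoverable,
         :confirmable, :validatable
end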

Migrating users from Rails to Cognito

We wanted to migrate a user from the Rails database to Cognito if the user doesn’t already exist in the Cognito database. For that, at least one new endpoint in config/routes.rb is required:

post '/aws/auth',
  to: 'users#aws_auth',
  defaults: { format: 'json' },
  as: 'aws_auth'

This route supposes that controllers/users_controller.rb has an aws_auth method:

class UsersController < ApplicationController

  # skip_filter other access restrictions...
  before_filter :restrict_access, only: [:aws_auth] # before_action in Rails 5+

  # (...)

  def aws_auth

    defaults = {
      id: nil,
      first_name: nil,
      last_name: nil,
      email: nil,
      authentication_hash: nil
    }
    user = User.find_by(email: aws_auth_params[:email])

    if user
      answer = user.as_json(only: defaults.keys)
      answer[:user_exists] = true
      answer[:success] = user.valid_password?(aws_auth_params[:password])
    else
      answer = defaults
      answer[:success] = false
      answer[:user_exists] = false
    end

    respond_to do |format|
      format.json { render json: answer }
    end

  end

  # (...)

  private

    def restrict_access
      # NOTE: a constant-time comparison such as
      # ActiveSupport::SecurityUtils.secure_compare would be safer here.
      head :unauthorized unless params[:access_token] == TOKEN_AUTH_OF_YOUR_CHOICE
    end

    # Strong parameters for the aws_auth endpoint.
    def aws_auth_params
      params.permit(:email, :password, :access_token)
    end

end
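
For reference, a successful answer from this endpoint, which the Lambda below relies on, looks roughly like this (all values are illustrative):

{
  "id": 42,
  "first_name": "Jane",
  "last_name": "Doe",
  "email": "jane@example.com",
  "authentication_hash": null,
  "user_exists": true,
  "success": true
}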

That’s all for the Rails side. Now, in the Cognito user pool of the AWS console, there is a Triggers section to which Lambdas can be attached. The User Migration trigger can be set to the following JavaScript Lambda:

'use strict';

console.log('Loading function');

const https = require('https');

const attributes = (response) => {
  return {
    "email": response.email,
    "email_verified": "true",
    "name": response.first_name + " " + response.last_name,
    "custom:rails_app_id": response.id
  };
};

const checkUser = (server, data, callback) => {
  let postData = JSON.stringify( data );

  let options = {
    hostname: server,
    port: 443,
    path: "/aws/auth",
    method: 'POST',
    headers: {
         'Content-Type': 'application/json',
         'Content-Length': Buffer.byteLength(postData) // bytes, not characters
       }
  };

  let req = https.request(options, (res) => {

    let data = "";
    res.on('data', (chunk) => {
      data += chunk;
    });
    res.on('end', () => {
      if ( data ){
        let response = JSON.parse( data );
        console.log( 'response:', JSON.stringify(response, null, 2) );
        callback( null, response);
      } else {
        callback( "Authentication error");
      }
    });
  });

  req.on('error', (e) => {
    callback( e );
  });

  req.write( postData );
  req.end();
}
exports.handler = (event, context, callback) => {

  console.log('Migrating user:', event.userName);

  let rails_server_url = process.env.rails_server_url || "rails.app.your_company.com";

  checkUser( rails_server_url, {
    email: event.userName,
    password: event.request && event.request.password,
    access_token: process.env.rails_server_access_token
  }, (err, response ) => {
    if ( err ){
      return context.fail("Connection error");
    }
    if ( event.triggerSource == "UserMigration_Authentication" ) {
      // authenticate the user with your existing user directory service
      if ( response.success ) {
          event.response.userAttributes = attributes( response ) ;
          event.response.finalUserStatus = "CONFIRMED";
          event.response.messageAction = "SUPPRESS";
          console.log('Migrating user:', event.userName);
          context.succeed(event);
      } else if ( response.user_exists ) {
          context.fail("Bad password");
      } else {
        context.fail("Bad user");
      }
    } else if ( event.triggerSource == "UserMigration_ForgotPassword" ) {
      if ( response.user_exists ) {
        event.response.userAttributes = attributes( response ) ;
        event.response.messageAction = "SUPPRESS";
        console.log('Migrating user with password reset:', event.userName);
        context.succeed(event);
      } else {
        context.fail("Bad user");
      }
    } else {
      context.fail("Bad triggerSource " + event.triggerSource);
    }
  });
};

This script uses the two following environment variables, which should be set with the proper values:

rails_server_url=URL
rails_server_access_token=TOKEN_AUTH_OF_YOUR_CHOICE

Conclusion

It’s possible.

Raspberry Pi 3 and bluetooth speaker setup [command line]

Streaming sound from a Raspberry Pi 3 to a Bluetooth speaker was not as easy as it should be. The Internet is overloaded with outdated information about this issue, so I compiled a small tutorial for the latest Raspbian (at the time of writing). My setup consists of a Raspberry Pi 3 running Raspbian and a JBL GO Bluetooth speaker.

Trying to connect the device to the Raspberry Pi with the following commands will raise an error:  Failed to connect: org.bluez.Error.Failed

pi@raspi:~# sudo bluetoothctl -a
Agent registered
[bluetooth]# default-agent
Default agent request successful
[bluetooth]# power on
Changing power on succeeded
[bluetooth]# scan on
Discovery started
[CHG] Controller B8:27:EB:XX:XX:XX Discovering: yes
[NEW] Device 40:EF:4C:AA:BB:CC JBL GO
[CHG] Device 40:EF:4C:AA:BB:CC UUIDs:
0000110d-0000-1000-8000-00805f9baabb
0000110b-0000-1000-8000-00805f9baabb
0000110e-0000-1000-8000-00805f9baabb
0000110f-0000-1000-8000-00805f9baabb
0000111e-0000-1000-8000-00805f9baabb
00001108-0000-1000-8000-00805f9baabb
00001131-0000-1000-8000-00805f9baabb
[bluetooth]# pair 40:EF:4C:AA:BB:CC
Attempting to pair with 40:EF:4C:AA:BB:CC
[CHG] Device 40:EF:4C:AA:BB:CC Connected: yes
[bluetooth]# trust 40:EF:4C:AA:BB:CC
[CHG] Device 40:EF:4C:AA:BB:CC Trusted: yes
Changing 40:EF:4C:AA:BB:CC trust succeeded
[CHG] Device 40:EF:4C:AA:BB:CC UUIDs:
00001108-0000-1000-8000-00805f9baabb
0000110b-0000-1000-8000-00805f9baabb
0000110c-0000-1000-8000-00805f9baabb
0000110e-0000-1000-8000-00805f9baabb
0000111e-0000-1000-8000-00805f9baabb
[CHG] Device 40:EF:4C:AA:BB:CC Paired: yes
Pairing successful
[CHG] Device 40:EF:4C:AA:BB:CC Connected: no
[bluetooth]# connect 40:EF:4C:AA:BB:CC
Attempting to connect to 40:EF:4C:AA:BB:CC
Failed to connect: org.bluez.Error.Failed

It is not obvious, but what the connecting device is missing is an answer from the host saying that it can stream sound. In order to send the correct answer to the device, pulseaudio must be started first and must correctly load its Bluetooth module.

First, ensure that all requirements are installed:

pi@raspi:~# sudo apt install pi-bluetooth pulseaudio-module-bluetooth mpg123

Then start pulseaudio and check that the module is indeed loaded:

pi@raspi:~# sudo pulseaudio --start
W: [pulseaudio] main.c: This program is not intended to be run as root (unless --system is specified).
pi@raspi:~# pactl list | grep bluetooth
Name: module-bluetooth-policy
module.description = "When a bluetooth sink or source is added, load module-loopback"
Name: module-bluetooth-discover

At this point, trying to connect your device should work. Most devices produce a sound when the connection succeeds. Within bluetoothctl, it is not necessary to redo the pairing procedure, but before continuing you may need to ensure that the Bluetooth controller is on with [bluetooth]# power on .

[bluetooth]# connect 40:EF:4C:AA:BB:CC
Attempting to connect to 40:EF:4C:AA:BB:CC
[CHG] Device 40:EF:4C:AA:BB:CC Connected: yes
Connection successful

Load an MP3 onto your Raspberry Pi and play it using mpg123 your.mp3 .

If you restart your Raspberry Pi now, your device will fail to connect again.
pulseaudio is not really meant to be started system-wide at boot, but in the case of an embedded device it makes sense.

You could use /etc/rc.local, but that’s a bit punk, so instead let’s create a systemd service.
Add the following content to the file /lib/systemd/system/pulseaudio.service :

[Unit]
Description=PulseAudio Daemon

[Install]
WantedBy=multi-user.target

[Service]
Type=simple
PrivateTmp=true
ExecStart=/usr/bin/pulseaudio --system --realtime --disallow-exit --no-cpu-limit

Then start the service by hand using systemctl start pulseaudio . When running systemctl status pulseaudio , you may see a warning about starting pulseaudio system-wide, but don’t worry: with this setup it makes sense.
Add pulseaudio to the list of services started at boot: systemctl enable pulseaudio.service

Don’t forget to add both the root and pi users to the pulse-access group in /etc/group :

pulse-access:x:115:root,pi

Note that the group ID 115 may be different on your system.
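
Instead of editing the file by hand, the same result should be achievable with adduser:

pi@raspi:~# sudo adduser root pulse-access
pi@raspi:~# sudo adduser pi pulse-access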

Reboot your Raspberry Pi and you are good to go: if the speaker is ON, it should automatically connect to your system.

Let’s encrypt (il buono), rails (il brutto) and heroku (il cattivo)

Il buono

This article is just paraphrasing this one with a bit more accuracy, corrections and cynicism.


First step: Update your Rails code on Heroku

The route /.well-known/acme-challenge/KEY  should be added to your config/routes.rb  file like so:

get '/.well-known/acme-challenge/:id' => 'CONTROLLER#letsencrypt'

where CONTROLLER  is the controller of your choice, in which the method should look like this

def letsencrypt
  if params[:id] == ENV['LETSENCRYPT_KEY']
    render text: ENV['LETSENCRYPT_CHALLENGE'] # use render plain: on Rails 5+
  else
    render text: "nope"
  end
end

and don’t forget to make it “public”, so if you are using cancancan the following line is required at the top of your controller file

skip_authorization_check only: [:letsencrypt]

Push it to Heroku

> git push heroku # this may differ depending on your setup

and wait for it to be deployed.


Second step: Install the required software and generate the key

On Ubuntu you can install the letsencrypt  command like this

> sudo apt install letsencrypt

Then run the command with root privileges

> sudo letsencrypt certonly --manual

follow the instructions, and when it asks you to verify that the given URL is reachable, don’t press ENTER but follow the third step instead.


Third step: Update Heroku variables

Go to the Heroku console, under Settings > Reveal config vars, and add the LETSENCRYPT_KEY  and LETSENCRYPT_CHALLENGE  keys with their corresponding values from the letsencrypt  command of the previous step.
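
If you prefer the command line, the same can be done with the Heroku CLI (assuming you are logged in):

> heroku config:set LETSENCRYPT_KEY=VALUE LETSENCRYPT_CHALLENGE=VALUE -a YOUR_APP_NAME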

Restart the app within the Heroku UI or with the following command, where YOUR_APP_NAME  is… your app name.

> heroku restart -a YOUR_APP_NAME

It would be a good idea to try the URL from your browser before continuing.


Fourth step: Verify the challenge and push certificate to Heroku

If your SSL endpoint is not yet set up on Heroku, take the time and money to do it

> heroku addons:create ssl:endpoint -a YOUR_APP_NAME

Then you will be able to push the certificate to your Heroku instance.

> sudo heroku certs:add /etc/letsencrypt/live/YOUR_DOMAIN/fullchain.pem /etc/letsencrypt/live/YOUR_DOMAIN/privkey.pem -a YOUR_APP_NAME

If it’s a certificate update, replace certs:add  with certs:update  and you are good.


Fifth and last step: Behold!

Give yourself some time for a walk and think about the beauty of living, yet still away from the coming technological singularity.

Slow Query with “LIMIT 1” in Postgresql

Recently I had issues with a production database running slow on tables with more than 1M rows.
This is a complex Ruby on Rails app connected to a PostgreSQL database, and I went through the whole stack to figure out that it was a simple index problem. Little related documentation was found about this issue, therefore I’m quickly posting a summary of my adventure.

In Rails I often write something as simple as ActivityLog.first or Player.find(123).activity_logs.first , where the ActivityLog model has a default scope described like so:

class ActivityLog < ActiveRecord::Base
  default_scope { order("created_at desc") }
end

This leads to a SQL query looking like this:

SELECT "activity_logs".* FROM "activity_logs" WHERE "activity_logs"."player_id" = 123 ORDER BY created_at ASC LIMIT 1;

This is probably one of the most common queries in my Rails applications, and certainly not only in my applications.

Unfortunately, this may become a slow query on big tables. Indeed, if this query is called often, your app may go from sticky to unusable (the latter was my case).

The following example illustrates the issue.
Again, this is a very common query:

# EXPLAIN ANALYSE SELECT "activity_logs".* FROM "activity_logs" WHERE "activity_logs"."player_id" = 123 ORDER BY created_at ASC LIMIT 1;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.42..464.30 rows=1 width=708) (actual time=1124.357..1124.357 rows=0 loops=1)
-> Index Scan using index_activity_logs_on_created_at on activity_logs (cost=0.42..421660.15 rows=909 width=708) (actual time=1124.356..1124.356 rows=0 loops=1)
Filter: (player_id = 123)
Rows Removed by Filter: 1033340
Planning time: 0.084 ms
Execution time: 1124.378 ms
(6 rows)

On my dev machine, which is way more powerful than our production VPS, this query takes 1’124 ms to run. Even called only once per user click, it would immediately degrade your Apdex score.

As described in the PostgreSQL docs, and as the EXPLAIN (ANALYZE, BUFFERS) output shows, the query is slow because of the large amount of disk access caused by the filter on player_id.

The numbers provided by BUFFERS help to identify which parts of the query are the most I/O-intensive.

# EXPLAIN (ANALYSE, BUFFERS) SELECT "activity_logs".* FROM "activity_logs" WHERE "activity_logs"."player_id" = 123 ORDER BY created_at ASC LIMIT 1;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.42..464.30 rows=1 width=708) (actual time=1127.860..1127.860 rows=0 loops=1)
Buffers: shared hit=405695 read=445170
-> Index Scan using index_activity_logs_on_created_at on activity_logs (cost=0.42..421660.15 rows=909 width=708) (actual time=1127.858..1127.858 rows=0 loops=1)
Filter: (player_id = 123)
Rows Removed by Filter: 1033340
Buffers: shared hit=405695 read=445170
Planning time: 0.083 ms
Execution time: 1127.880 ms
(8 rows)

The planner decided to use an index scan on created_at (the ORDER BY  attribute) and then filter on player_id, which in this situation isn’t the best choice. Disabling index scans with SET enable_indexscan = OFF;  drastically improves performance, but it’s not a production solution.

# SET enable_indexscan = OFF;
# EXPLAIN ANALYSE SELECT "activity_logs".* FROM "activity_logs" WHERE "activity_logs"."player_id" = 123 ORDER BY created_at ASC LIMIT 1;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=3395.09..3395.09 rows=1 width=708) (actual time=0.026..0.026 rows=0 loops=1)
-> Sort (cost=3395.09..3397.36 rows=909 width=708) (actual time=0.025..0.025 rows=0 loops=1)
Sort Key: created_at
Sort Method: quicksort Memory: 25kB
-> Bitmap Heap Scan on activity_logs (cost=19.47..3390.54 rows=909 width=708) (actual time=0.021..0.021 rows=0 loops=1)
Recheck Cond: (player_id = 123)
-> Bitmap Index Scan on index_activity_logs_on_player_id (cost=0.00..19.24 rows=909 width=0) (actual time=0.019..0.019 rows=0 loops=1)
Index Cond: (player_id = 268)
Planning time: 0.087 ms
Execution time: 0.052 ms
(10 rows)

The query execution time drops to 0.05 ms, a factor of 22’480 compared with the previous planner decision!

As described here, another way to trick the planner is to add another field to the ORDER BY  clause.

# EXPLAIN (ANALYSE, BUFFERS) SELECT "activity_logs".* FROM "activity_logs" WHERE "activity_logs"."player_id" = 268 ORDER BY created_at ASC, id ASC LIMIT 1;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=3395.09..3395.09 rows=1 width=708) (actual time=0.038..0.038 rows=0 loops=1)
Buffers: shared read=3
-> Sort (cost=3395.09..3397.36 rows=909 width=708) (actual time=0.037..0.037 rows=0 loops=1)
Sort Key: created_at, id
Sort Method: quicksort Memory: 25kB
Buffers: shared read=3
-> Bitmap Heap Scan on activity_logs (cost=19.47..3390.54 rows=909 width=708) (actual time=0.034..0.034 rows=0 loops=1)
Recheck Cond: (player_id = 268)
Buffers: shared read=3
-> Bitmap Index Scan on index_activity_logs_on_player_id (cost=0.00..19.24 rows=909 width=0) (actual time=0.032..0.032 rows=0 loops=1)
Index Cond: (player_id = 268)
Buffers: shared read=3
Planning time: 0.093 ms
Execution time: 0.060 ms
(14 rows)

This improves the execution time in a comparable way. In a hurry, this is the solution I adopted: I quickly patched my scope that way to improve performance globally. It was safe and easy to commit, with the advantage of not expanding the size of the database indices.

class ActivityLog < ActiveRecord::Base
  default_scope { order("created_at desc, id desc") }
end

But for the sake of a better understanding I dug into this issue a bit more. I have more than one big table, and more than one app online… I don’t like to keep such gray zones and smoking workarounds in my code.

First, I tried to help the planner change its decision by tweaking the statistics on every column, but it changed neither the planner’s decision nor the execution time.

# ALTER TABLE activity_logs ALTER COLUMN created_at SET STATISTICS 1000;
# EXPLAIN (ANALYSE, BUFFERS) SELECT "activity_logs".* FROM "activity_logs" WHERE "activity_logs"."player_id" = 268 ORDER BY created_at ASC LIMIT 1;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.42..474.81 rows=1 width=707) (actual time=1145.167..1145.167 rows=0 loops=1)
Buffers: shared hit=405660 read=445205
-> Index Scan using index_activity_logs_on_created_at on activity_logs (cost=0.42..421251.54 rows=888 width=707) (actual time=1145.165..1145.165 rows=0 loops=1)
Filter: (player_id = 268)
Rows Removed by Filter: 1033340
Buffers: shared hit=405660 read=445205
Planning time: 0.092 ms
Execution time: 1145.187 ms
(8 rows)

Then I realized that simply building an index on player_id (my condition) and created_at (my ordering attribute) should let PostgreSQL run my query efficiently while keeping the planner’s index-scan decision.

CREATE INDEX activity_logs_player_id_created_at ON activity_logs (player_id, created_at);

…and the end result is pleasant enough. 0.03ms – my work is almost done.

# SET enable_indexscan = ON;
# EXPLAIN (ANALYSE, BUFFERS) SELECT "activity_logs".* FROM "activity_logs" WHERE "activity_logs"."player_id" = 268 ORDER BY created_at ASC LIMIT 1;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.42..4.45 rows=1 width=707) (actual time=0.013..0.013 rows=0 loops=1)
Buffers: shared hit=3
-> Index Scan using activity_logs_player_id_created_at on activity_logs (cost=0.42..3570.31 rows=888 width=707) (actual time=0.012..0.012 rows=0 loops=1)
Index Cond: (player_id = 268)
Buffers: shared hit=3
Planning time: 0.099 ms
Execution time: 0.031 ms
(7 rows)

On the Rails side, a simple migration like the following changed my users’ lives and restored my peace of mind. As a bonus, thanks to the leftmost-prefix rule, this composite index also covers lookups on player_id alone.

class AddIndicesToBigTables < ActiveRecord::Migration
  def change
    add_index :activity_logs, [:player_id, :created_at], unique: false
  end
end