Programming Tech Industry

The How and Why of End-to-End Testing

Perhaps the most significant and under-appreciated aspect of Rails and Agile software development of the last roughly 15 years is the culture and discipline around testing and test-driven development.

I’ve never come to understand why testing and TDD are so often maligned by the loudest, most vocal developers: It’s too slow, it takes longer, the boss didn’t want or ask for it, they’ll say.

You don’t hear about these developers or this ideology often in professional circles, but you encounter them quickly in the wild west of freelance development.

[SEIZURE WARNING: There is an animated GIF towards the bottom of this page that flashes. Please disable animated GIFs if you are susceptible to flashing lights.]

Indeed, much of the popular rhetoric in the Rails community is about a codebase’s test suite (a suite of tests for your whole application, collectively called “the tests” or “the specs”): How much of your codebase is covered (as measured in %, which I discuss further below)? How easy are the tests to write? Do they use factories or fixtures? How brittle are they? Do they test the right things, and are they valuable?

All of these are the right questions. Although there is no substitute for day-in, day-out practice to become great at testing, I will try to offer some broad ‘best practice’ answers to these questions.

The enlightened developers don’t ask or care about whether or not the boss told us to write a tested codebase. We just know the answers to the above questions and do what’s right for the codebase: write specs.

Testing has varying degrees, varying methods, varying strengths.

In 99 Bottles of OOP, Metz, Owen, and Stankus make this interesting observation:

Belief in the value of TDD has become mainstream, and the pressure to follow this practice approaches an unspoken mandate. Acceptance of this mandate is illustrated by the fact that it’s common for folks who don’t test to tender sheepish apologies. Even those who don’t test seem to believe they ought to do so.

(Metz, et al, 99 Bottles of OOP, Second Edition, 2020. p 43)

So testing exists in a murky space: The top dev shops and teams know it is essential, but its implementation is inconsistent. Sadly, I’ve seen lots of development happen where people either just don’t write tests, write tests blindly, use tests as a cudgel, or skip end-to-end testing altogether.

Many years in this industry have led me to what seems like an extreme position. Not writing tests should be seen as akin to malpractice in software development. Hiring someone to write untested code should be outlawed.

Having a tested codebase is absolutely the most significant benchmark in producing quality software today. If you are producing serious application development but you don’t have tests, you have already lost.

Having a good test suite is not only the benchmark of quality, it means that you can refactor with confidence.

There are two kinds of tests you should learn and write:

  • Unit testing (also called model testing or black-box testing)
  • End-to-end testing (also called integration testing, feature testing, or system tests)

These go by different names. Focus on the how and why of testing and don’t get lost in the implementation details of the different kinds of tests. (To learn to do testing in Ruby, you can check out my course where I go over all the details.)

Unit Testing

Unit tests are the “lowest-level” tests. In unit testing, we are testing only one single unit of code: typically, for Rails, a model. When we talk about unit testing in other languages, it means the same as it does for Rails, but might be applied in other contexts.

The thing you are testing is a black box. In your test, you will give your black box some inputs, tell it to do something, and assert that a specific output has been produced. The internals (implementation details) of the black box should not be known to your unit test.
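To make this concrete, here’s a minimal sketch of a black-box unit test in plain Ruby, using Minitest (Ruby’s built-in framework) so it stands alone; in a Rails app you’d more likely write this as an RSpec model spec. The `PriceCalculator` class is invented for illustration:

```ruby
# Illustrative black box (the class name and behavior are invented
# for this example, not taken from a real app).
class PriceCalculator
  def initialize(base_price_cents)
    @base_price_cents = base_price_cents
  end

  # Returns the final price in cents after a percentage discount.
  def total(discount_percent: 0)
    (@base_price_cents * (100 - discount_percent) / 100.0).round
  end
end

require 'minitest/autorun'

class PriceCalculatorTest < Minitest::Test
  # We only assert on inputs and outputs. How the discount is
  # computed internally is none of this test's business.
  def test_no_discount_returns_base_price
    assert_equal 1000, PriceCalculator.new(1000).total
  end

  def test_discount_is_applied
    assert_equal 900, PriceCalculator.new(1000).total(discount_percent: 10)
  end
end
```

Notice that the tests never reach into `@base_price_cents` or the rounding logic: if we later rewrite the internals, these tests should still pass unchanged.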

This fundamental tenet of unit testing is probably one of the single most commonly repeated axioms of knowledge in software development today.

The way to miss the boat here (unfortunately) is to follow the axiom strictly but misunderstand why you are doing it.

Testing, and especially learning to practice test-driven development (that’s when you force yourself not to write any code unless you write a test first), is in fact a lot deeper and more significant than just about quality, refactoring, and black boxes. (Although if you’ve learned that much by now you’re on the right track.)

Most people think that software, especially web software, is written once and then done. This is a fallacy: Any serious piece of software today is iterated and iterated. Even if you are writing an application for rolling out all at once, on the web there should always be a feedback loop.

Perhaps one of the worst and most problematic anti-patterns I’ve ever seen is when contractors write code, it is deployed, and nobody ever looks at any error logs. Or any stack-traces. Or even at the database records. (Typically this happens less in the context of companies hiring employees because employees tend to keep working for your company on an ongoing basis whereas contractors tend to ‘deliver’ the product and then leave.)

It’s not just about “catching a bug” here or there. Or tweaking or modifying the software once it’s live. (Which, to be fair, most developers don’t actually like to do.)

It’s about the fact that once it is live, anything and everything can and will happen. As a result, the data in your data stores might get into all kinds of states you weren’t expecting. Or maybe someone visits your website in a browser that doesn’t support the Javascript syntax you used. Or maybe this, or maybe that. It’s always something.

This is the marriage of testing & ‘real life’: You want your tests to be ‘as isolated’ as possible, yet at the same time ‘as realistic’ as they need to be in order to anticipate what your users will experience.

That’s the right balance. Your code doesn’t exist in a vacuum, and the test environment is only a figment of your imagination. The unit test is valuable to you because it is as realistic as it needs to be to mimic what will happen to your app in the real, wild world of production.

With unit testing, you aren’t actually putting the whole application through its paces: You’re just testing one unit against a set of assertions.

In the wild (that is, real live websites), all kinds of chaos happens. Your assumption that user_id would never be nil, for example, proves not to be the case in one small step of the workflow because the user hasn’t been assigned yet. (Stop me if you’ve heard this one before.)

You never wrote a spec for the user_id being nil, because you assumed that that could never happen. Well, it did. Or rather, it might.
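Once it bites you, you write the guard and the spec. A hedged sketch in plain Ruby (the `Comment` class and its methods are invented for illustration, not from a real app):

```ruby
# Invented example: a record that may not have a user assigned yet.
class Comment
  attr_reader :user_id

  def initialize(user_id: nil)
    @user_id = user_id
  end

  def author_label
    # The guard you only add after production proves user_id CAN be nil.
    return '(anonymous)' if user_id.nil?

    "user ##{user_id}"
  end
end

# The spec you never wrote -- until the wild proved you wrong:
raise unless Comment.new(user_id: nil).author_label == '(anonymous)'
raise unless Comment.new(user_id: 7).author_label == 'user #7'
```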

Many developers, especially the ones with something to prove, get too focused on unit testing. For one thing, they use the percentage of codebase covered as a badge of honor.

Percentage of Codebase Covered

When you run your tests, a special tool called a coverage reporter can scan the lines of code in your application to determine whether each line was executed during your tests. It shows you which lines the tests ran over and which lines were ‘missed.’

It doesn’t tell you, of course, whether your test was correct (that it asserted the right thing). It just tells you where you’ve missed lines of code. The typical benchmark for a well-tested Rails application is about 85–95% test coverage. (Because of various nuanced factors, there are always some files that you can’t or don’t need to test — typically not your application files.)
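For reference, here’s a minimal sketch of wiring up a coverage reporter in Ruby with the simplecov gem (the filter and threshold values are assumptions for illustration, not prescriptions). It must be required before any application code loads:

```ruby
# spec/spec_helper.rb -- SimpleCov must start before the app is loaded,
# or already-loaded files won't be tracked.
require 'simplecov'

SimpleCov.start 'rails' do
  add_filter '/spec/'    # don't count the tests themselves as covered code
  minimum_coverage 85    # fail the suite if coverage dips below the benchmark
end
```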

Here I use a tool in Ruby called simplecov-rcov to show which lines (precisely, line-by-line, and file-by-file) are covered. Here in this baby little project of mine, I have an unfortunate 36.55% of my codebase covered:

example coverage report

As you see, the files are sorted with the least covered files shown up top. The top files are in red and say “0.00 %” covered because the test suite does not go into that file.

When I click into the file, I can actually see which lines are covered and uncovered, in red & green like so:

Code coverage report showing an untested line of Ruby

(Here’s a great example of that “it only happens in the wild” thing I was talking about earlier. In theory, I should never get passed a room id (params[:room]) that is not in my database [see line 4], but in practice, for some reason, while I was debugging, I did. So I added a small guard to catch this while debugging, thus making the line of code inside the if statement uncovered by my test suite.)

Correlating the total percentage of test coverage to your code quality and/or value of the tests is often a fallacy: Look at the percentage of codebase covered, but not every day.

The problem with over-emphasis on unit testing is the dirty little secret of unit testing: Unit tests rarely catch bugs.

So why do we unit test at all then? Unit tests do catch all manner of problems when you are upgrading.

You should unit test your code for the following four reasons:

(1) It helps you think about and structure your code more consistently.

(2) It will help you produce cleaner, more easily reasoned code as you refactor.

(3) Refactoring will, in turn, reveal more about the form (or shape) of your application that you couldn’t realize upfront.

(4) Your unit tests will catch bugs quickly when you upgrade Rails.

That’s it. Notice that ‘catching regressions’ (or bugs) is not listed here. That’s significant because many developers think unit tests cover all of their bases. Not only do they not cover all of your bases: They don’t even catch or prevent regressions (bugs) in live production apps very often.

Testing is important. Unit testing and end-to-end testing are both important, but between the two, end-to-end testing is the most important of all.

End-To-End Testing

End-to-end testing goes by many names: System specs, integration specs, Capybara, Cypress, Selenium.

End-to-end testing for Javascript applications means the following things:

  1. Your test starts in the database. I like factories, but fixtures are also popular.
  2. Your test ‘hits’ the server (Rails, Node, Java, etc)
  3. The server returns a front-end in Javascript
  4. Your test interacts in Javascript with your web page

If you do not have all four of those components, you do not have end-to-end testing. Using Capybara, you are really doing all of these things.
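As an illustration, here’s roughly what such a test looks like as a Rails system spec driven by Capybara. This fragment assumes a Rails app with RSpec, Capybara, FactoryBot, Devise, and a hypothetical posts feature, so it isn’t runnable standalone; it’s a sketch of the shape, not a definitive implementation:

```ruby
# spec/system/create_post_spec.rb (hypothetical feature)
require 'rails_helper'

RSpec.describe 'Creating a post', type: :system, js: true do
  it 'saves the post and shows it on the page' do
    user = FactoryBot.create(:user)        # 1. the test starts in the database
    sign_in user

    visit new_post_path                    # 2. 'hits' the server
    fill_in 'Title', with: 'Hello, world'  # 3 & 4. the server returns a
    click_button 'Create Post'             #        front-end, and we drive it

    expect(page).to have_content('Hello, world')
    expect(Post.last.title).to eq('Hello, world')
  end
end
```

All four components from the list above appear in those few lines, which is exactly why these specs are so valuable.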

If you’ve never seen a Capybara test run, here’s what it looks like:

Moving Visualization of Selenium testing
A moving visualization showing a Selenium suite running in a Rails application.

I like to show this to people because I don’t think many people see it. Often the specs are run in headless mode, which means those things are happening, just not on the screen. (But you’re still really doing them, invisibly, which is the important part.) While headless mode is much faster (and typically preferred by developers), using Selenium to control a real browser is an extraordinarily powerful tool — not for the development itself, but for evangelizing these techniques and spreading the good word of end-to-end testing.
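If you want to flip between headless runs for speed and a visible browser for show-and-tell, Capybara lets you register both drivers and pick one at runtime. A sketch, assuming the selenium-webdriver gem and Chrome (the driver names and env var are my own invention):

```ruby
# spec/support/capybara_drivers.rb (sketch)
Capybara.register_driver :visible_chrome do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end

Capybara.register_driver :headless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument('--headless')
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

# Watch the suite drive a real browser by setting an env var:
Capybara.javascript_driver =
  ENV['SHOW_BROWSER'] ? :visible_chrome : :headless_chrome
```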

Most non-developers simply don’t even know what this is. I’ve talked to countless CEOs, product people, people who’ve worked in tech for years and have never even seen an end-to-end test be run. (They’ve literally never witnessed with their own eyes what I’ve shown you in the animated GIF above.)

What these people don’t understand is that TDD and end-to-end testing are practices of web application development that are themselves an advancement. The advancement facilitates a more rapid development process, less code debt, and a lower cost of change.

Without having actually witnessed the test runner drive the browser, it is shocking to me how many people in positions of authority are happy to hire teams of QA people to do manual testing for every new feature or release. (Disparagingly called “monkey testing” by the code-testing community.) With the easy and “inexpensive” availability of remote QA people, an industry of people is happy to keep monkey testing until judgment day. What they don’t know is that those of us who are code-testing are already in the promised land of sweet milk and honey.

My biggest disappointment personally moving from Rails to the Javascript world (Vue, Ember, Angular, React) is the lack of end-to-end-testing in Javascript. It’s not that JSers don’t ever do end-to-end testing— it’s that it might not be possible in your setup or your team.

If you are only working on the frontend, by definition you don’t have access to the database or the backend.

The fundamental issue with the shift away from Rails monoliths and towards microservices is: How are these apps tested?

I don’t know about you, but after years of being a user of microservices, I’m not entirely sold.

Don’t get me wrong: I am not categorically opposed to microservices. (Your database, and Redis, both probably already in your app, could be thought of as microservices and they work very well for us Rails developers.)

But designing applications around microservices is a paradigm ideal for huge conglomerate platforms that simultaneously want to track you, show you ads, and curate massive amounts of content using algorithms.

Most apps aren’t Facebook. I hypothesize that the great apps of the 2020s and 2030s won’t be like Facebook either.

That’s why having the power to do database migrations without involving “a DBA” (or a separate database team), or having to get the change through a backend team— something which is normal for smaller startups and Rails — has been so powerful for the last 15 years.

The social media companies are well poised for leveraging microservices, but most small-medium (even large) Rails apps are not, and here’s why: Doing end-to-end testing with a suite of microservices is a huge headache.

It’s a lot of extra work, and because it’s so hard many developers just don’t do it. Instead, they fall back lazily to their unit testing and run their test coverage reports and say they have tested code. What? The API sent a field to the React Native app that it couldn’t understand so there’s a bug?

Oh well, that was the React Native developer’s problem. Or, that was the services layer’s problem.

It’s a slow, creeping NIMBY (not-in-my-backyard) or NIH (not-invented-here) kind of psychology that I see more and more as I learn about segregated, siloed teams where there’s a backend in Rails, a frontend in React or another JS framework, and a mobile app — all written by segregated, separated teams who need to have product managers coordinate changes between them.

Already we see lots of major companies with websites made up of thousands of microservices. I don’t think our web is better because of it: For me, most of my experience using these websites is spinning and waiting for things to load. Every interaction feels like a mindless, aimless journey waiting for the widget to load the next set of posts to give me that dopamine kick. Everywhere I look, things kind of work, mostly, but every now and then they just sort of have little half-bugs or non-responses. It’s all over Facebook, and sadly, across more and more of the web I use, this degradation in experience quality has gotten worse over the last few years.

It’s a disaster. I blame microservices.

I hear about everybody rushing into mobile development or Node development or re-writing it all in React and I just wonder: Where are the lessons learned by the Rubyists of the last 15 years?

Does anyone care about end-to-end testing anymore? I predict the shortsightedness will be short-lived, and that testing will see a resurgence in importance and popularity in the 2020s.

I don’t know where the web or software will go next, but I do know that end-to-end testing, as pioneered by Selenium in the last 10 years, is one of the most significant stories to happen to software development. There will always be CEOs who say they don’t care about tests. Don’t listen to them (also, don’t work for and don’t fund them). Keep testing and carry on.

[Disclaimer: The conjecture made herein should be thought of in the context of web application development, specifically modern Javascript apps. I wouldn’t presume to make generalizations about other kinds of software development but I know that testing is a big deal in other software development too.]


Universal Track Manager 0.7.1 Released

Rubygems says my Rails engine (a gem) Universal Track Manager has crossed the 5000 downloads milestone. Interested in tracking page visits in a Rails app?

Check out the newly upgraded SHA that makes querying for ad campaigns super-fast.

This is quickly approaching a very solid, robust solution for tracking ad campaigns in Rails.

More thanks to Seth Strumph from New Zealand for his work on version 0.7.1!

Tech Industry

A “Free Hand” At the Bagel Shop (and, On Software Project Estimation)

Although it doesn’t seem very related to software project estimation, there’s a bagel shop I go to where the staff does something uneventful but effective after they take my order. Once the order is written down, the cashier yells out “free hand.” The first time I heard it I wasn’t sure if I was missing something, if she was talking to me perhaps, or if there was a secret language of bagel staff employees that I didn’t know about.

At this small, family-owned bagel & eggs deli in the Brooklyn neighborhood where I live (La Bagel Delight, shown here), I will get a bacon, egg & cheese on a plain bagel. This event — predictable — isn’t something you’d normally notice or mention. But this bit about them saying “free hand” reminds me of scrum software.

Picture 3 bagel workers and a line of 10 customers. Each order takes maybe 1 minute to 4 minutes, depending on whether it involves toasting, eggs, or other preparation. In this bagel shop, no worker is specialized — that is, no one person is dedicated to one kind of sandwich preparation. All of the workers take each new job indiscriminately. (We might call this a “homogeneous” team, and no, we’re not talking about them being gay.)

You’d think maybe the workers could “double up” the jobs: take 2 or 3 jobs and run them concurrently. For example, a customer with many sandwiches in their order could take upwards of several minutes to complete.

In the meantime, if the worker has a lot of extra time while they are waiting for the bread to toast, they might come back to the queue to take the next job. In this scenario— I find myself empathizing— I would imagine the worker must make his own queue — that is, a queue within a queue.

His queue is: 1) the first order, which is toasting (last time we checked), and 2) the new order he just picked up.

Interestingly, I imagine preparing bagels (toasting, buttering, making eggs, slicing cheese, layering with toppings, etc.) is similar to software project estimation in some ways. The worker must manage several wait states: times when he or she must stop and wait for something or someone else.

In both arenas, the length of time it takes to complete some parts of the task will be fixed (like the time it takes to toast). Other parts will take a length of time proportional to the size of the request. Still other parts will take an unusual, unexpectedly long time. A “snag” — for example, what happens when the egg salad is not made? If the egg salad isn’t made, the customer might have to wait up to 10 minutes. Who’s gonna make this egg salad? Put a pin in that and I’ll get back to that question in a second.

Other than the fixed costs and the unusually long costs (both can create wait states), what else is there?

How Does This Apply to Software Project Estimation?

In software project estimation, excluding the unknowns (those wait states and unexpected snags), one can reasonably say the length of time or level of effort to create a bagel (or piece of software) will be proportional to the size of the request (number of toppings or features).

In software, we face these kinds of things too. The build time for a compiled program, for example, can be thought of as fixed (typically). For a software project with code tests, the length of time to run the tests (like continuous integration) can be thought of as fixed time costs too.

Hopefully, your software development doesn’t have unusually long costs. I’ll bet the guy working at the bagel shop doesn’t want those either. You see, the egg salad not being made is analogous to the comps or design specs not being prepared for a highly visual UX. Or worse, the design specs being made but the feature set being ill-defined.

Wait States in Software Projects

When a developer has a “free hand,” that means they have time and attention to give to your problem, or the next problem on the backlog. That problem — which really is the company’s problem — should be ready to go (without blockers, back & forth, etc). This way, the story (software development) can move through the queue as quickly as at a bagel shop like La Bagel Delight.

That’s why a good bagel store manager and a good software product manager remove blockers. The bagel store manager notices when the egg salad is low and makes more. The product manager foresees the blocker the software developer will have and removes it before it becomes a blocker.

“Free hand” is what the cashier calls out to ask if there is an available resource to take the next job. It’s a signal of the establishment’s demand and of the queue moving.

It turns out that while it’s easy to suspect that, like in the bagel shop, a ‘free hand’ can take 2 or 3 jobs at once, this is often a pitfall.

Why? Think of the whole system as a machine. If each cog has to manage its own internal queue of wait states, you will create a lot of task switching.

Task switching is your enemy!

Having lots of “single queued” developers who switch tasks all day long is, fundamentally, anti-scrum. By doing this, the product manager creates muri (Japanese for “overburden”).

An efficient system is the only way to scale up. In both software development and a small food store, we see common elements:

  • Jobs come in one by one (or sometimes many orders from one individual)
  • Each job takes a varying amount of time to complete

Although it is tempting to manage this scrum-within-scrum, it is the path to hell. The reason for this is more obvious in software than it is at the bagel shop. There’s no good reason for long wait states. When a story isn’t ready, it’s best for the developers to put it back onto the backlog (or someone else’s backlog) and move on to the next one.

If you take one thing away from this article it is that you should reduce muda (Japanese for “value-reducing waste”) by eliminating blockers as quickly as possible. In this way, you will bring the work back into the main branch (in Git terminology) in a quick, iterative fashion. Ideally, your development work is deployed to production as iteratively and quickly as possible, too. That’s how you know you have a clear definition of done.

Remember, always have a quick stand-up at regular intervals. Stand-up is always about 1) What you accomplished yesterday, 2) What you’re working on today, and 3) Anything that is blocking you. Most importantly, a ritual stand-up is when blockers can be removed. (Remember, stand-up is not a management meeting, which I wrote a post about last year.)

Be like the bagel shop and always look for the team’s greatest need: Is the egg salad low? Let’s make some egg salad. If not, maybe go to the front of the queue and be the ‘free hand’ that will take the next job.

Tech Industry

The One About the Chickens and The Pigs (aka What Stand-up Is and What Stand-up Isn’t)

There’s an old adage in scrum software development about chickens and pigs at stand-up. Chickens are product managers and pigs are developers.

You don’t hear it too often anymore, probably because these days it feels a little sexist. (It’s not lost on anyone anymore how gendered the roles of product manager and developer feel in most tech companies— the product people being women and the developers being men.)

It takes a leap of faith to understand what it means, and what even is the question it’s asking anyway.

The question is fairly basic: Who participates at your morning company or engineering stand-up?

That is, I mean, really: who speaks and who does not speak. I know it sounds rigid, and those of us who talk about it get called a “scrum bullwhip” (a title I proudly wear). Pigs speak at stand-up. Chickens (product managers, CEOs, and stakeholders), if they come to stand-up (and generally only product managers should), aren’t supposed to speak unless spoken to.

What? It sounds like some kind of Renaissance classism, like when people used to say “children shouldn’t speak until spoken to,” but to understand the chicken & pig adage is to learn something core about scrum and the stand-up meeting itself.

  • Standup is about managing the work, not the people.

What the F does this have to do with chickens and pigs, you might be asking? (I warned you it’s a long way around with this one.) Well, the idea is that we’re making breakfast. We’re all making breakfast together.

The end result is the breakfast. How we get there matters, but not everybody’s contribution is equal.

The scrum process forces the engineers to prioritize working on the very most important thing first (hopefully, the one task they have assigned to them).

(It turns out that this metaphor was created by Ken Schwaber in the early days of scrum, but it was officially removed from the Scrum Guide in 2011 for many of the reasons I’ve discussed here.)

Most product people, stakeholders, and CEOs, being unfamiliar with the concept of “stand-up,” incorrectly treat engineering stand-up as a “management meeting” and think it’s their opportunity to talk or get what they want.

Sadly, this is, in fact, the opposite of scrum. Instead, scrum is about aligning your engineering efforts with your organizational-wide goals.

These days many of the millennials, born of the gadget generation, have grown up in jobs where they can hide their high-functioning adult ADHD (Attention Deficit Hyperactivity Disorder).

A high-performing engineering team works in total contrast to this ADHD, attention-switching, always-on-call mentality: The thing to work on is the one thing right in front of you, never anything else.

If that thing that you’re working on isn’t the most important thing, then the CEO or product owners haven’t correctly prioritized the backlog. When product people and CEOs come to scrum and participate, it’s like a group of people trying to make breakfast while some other people are trying to plan for lunch or dinner or tomorrow’s meals. The appropriate response you’ll get from the developers is: “Hey, back off, we’re making breakfast now, come back when we’re done and we’ll talk about lunch.”

The chickens lay eggs. The pigs are slaughtered. After breakfast is made, the chickens are still alive.

It’s a grotesque metaphor and one that can even be insulting to product people because it makes them feel like their contribution isn’t valuable. Well, that’s part of the crux of it too:

It isn’t that the contribution of the chicken (product owner) isn’t valuable, it’s that software development is a moving train.

As a developer, so that I can achieve flow, I should have the materials needed to do the ticket (story) I’m working on without a lot of back and forth with the stakeholder.

In fact, the correct amount of back and forth with the stakeholder is 0 (zero).

Each and every back and forth costs wait states — that is, times when the flow of the craft (that is, building the software) has to wait for someone else in the chain. If this is you then your process is most definitely held back by wait states.

What does this have to do with chickens not speaking at stand-up? It’s not that chickens literally have to stay silent; it’s that they don’t get a turn when you go around, each “giving” your stand-up.

Why don’t chickens have a turn? Because stand-up is about 1) what code we accomplished yesterday, 2) what we’re working on today, and 3) removing blockers.

The chickens don’t actually accomplish coding tasks. They contribute to the coding tasks (things like wireframes, mockups, designs, written user stories, business cases) — these are called artifacts. But these artifacts, although they help the process, aren’t the finished result: working production-quality code. (Except, arguably, in the case of web designs, where the designs are translated into working code.)

It’s a really old, sexist, and outdated adage that comes from the 90s, and in 2020 it’s probably insulting to most.

I haven’t yet thought of a good replacement, because the core of the adage (which I admit is kind of nonsense on many levels when you really try to lay it all out) is about the fact that the production of the code is what matters. Or, if you will, the end result (which in software development is working code.)

Scrum assumes and prioritizes high-performance engagement. At the same time, it shines a light on low-performing tools, processes, and people. It is the “sunlight” that will disinfect any broken engineering process.

It ain’t easy, and it ain’t for everyone, but when practiced right, it remains the most engaged and accelerated form of software discipline today.


Getting Stuck on the Version of Rails In Your Bundler

Confusingly, RubyGems (the system Bundler sits on top of) can have more than one version of Rails installed at once. If you have many versions, when you run rails new, it probably uses the default one, which could be a very old one. This often confuses new developers, especially if you installed Rails years ago and have come back to pick it up again.

To see which versions of Rails you have installed, use

gem list | grep rails

(here you are grepping, or searching against, the output for the string “rails”; without grep you would see all of your gems)

You’ll see some other gems with the name “rails” in them too; fortunately, all the core Rails gems share the same version numbers.

To install a different version of Rails (remember, this just installs the gem code onto your system, ready for use), pass the version to -v:

gem install rails -v 6.0.3.2

Finally, if you want to force rails new to use a specific version, put the version in underscores before the “new” (that is, between “rails” and “new”):

rails _6.0.3.2_ new


Common Core JS Quick Setup

I am pleased to announce a rapid application development tool for Rails: It’s called “Common Core JS.”

The source code is on Github. When you install it, the gem is pulled from RubyGems.

The finished result of the example app we will build today can be seen here. (I recommend you type out each step and only refer to this if you need to.)

It’s based on the idea that you will embrace a common naming convention for your Rails app, will use AJAX rendered-everything (that means all the forms are remote: true), and want a common set of tools for rolling out a quick, dashboard-like app in Rails.

It comes with poor-man’s authentication built in. Poor man’s auth is safe and works fine, but it isn’t designed for you to grow your entire app on, and you should graduate out of it if you have any granularity to your access control.

Common Core is a fantastic tool for rapidly building prototypes. It can also be used to create API back-ends for Javascript-heavy apps.

It is a blunt instrument and you should wield it carefully. In particular, you’ll want to remove specific code that it creates, because it can create situations where users may have access they shouldn’t.

Quick Setup

Let’s build the fastest app you can possibly build in Rails.

I’ll call it MyGreatApp, but yours can be named anything, of course.

rails new MyGreatApp

Once you have the app generated (even on faster processors, rails new still takes a hot minute or two), go modify the Gemfile.

Add the Gem

gem 'common_core_js'

To your Gemfile.

Then run: bundle install

Setup The Common Core

Then run the Common Core install generator (note that it is implemented as a generator, not a rake task):

bundle exec rails generate common_core:install

Now you’ll create a User object, with a migration and model.

Do this using the generator below. Note that you should not add any fields that will conflict with Devise fields, like email (those will be added in the next step).

bundle exec rails generate model User name:string joined_date:date timezone:integer

Add Devise

Add devise to your Gemfile

gem 'devise'

run bundle install, then install devise on the User model:

rails generate devise:install
rails generate devise User

Take a look at the fields on your User database table now. (Here I’m using DB Browser for SQLite)

Notice the fields you added (name, joined_date, and timezone), which I’ve put a red box around, and the Devise fields, including email and encrypted_password, which I’ve put a blue box around.

Now you have the most bare-bones Rails app with the Devise gem installed. This is great because you get free sign-up, login, and forgot-password functionality immediately.


Add jQuery

Add jQuery to your package.json via yarn

yarn add jquery

Go to config/webpack/environment.js and add this code in between the existing two lines

const webpack = require('webpack')
environment.plugins.prepend('Provide',
  new webpack.ProvidePlugin({
    $: 'jquery/src/jquery',
    jQuery: 'jquery/src/jquery'
  })
)
The complete environment.js file looks like so (the part you are adding is the middle block):

const { environment } = require('@rails/webpacker')

const webpack = require('webpack')
environment.plugins.prepend('Provide',
  new webpack.ProvidePlugin({
    $: 'jquery/src/jquery',
    jQuery: 'jquery/src/jquery'
  })
)

module.exports = environment

Add require("jquery") to your app/javascript/packs/application.js file.

While we are here, let’s also add the common_core javascript too, using require("common_core").
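Assuming the Rails 6 defaults, the resulting pack file looks something like this (the last two require lines are the additions; your generated file may differ slightly):

```javascript
// app/javascript/packs/application.js
require("@rails/ujs").start()
require("turbolinks").start()
require("@rails/activestorage").start()
require("channels")

require("jquery")
require("common_core")
```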


Adding Bootstrap

Next let’s add Bootstrap to the Gemfile:

gem 'bootstrap', '~> 4'
gem 'font-awesome-rails'

Next delete app/assets/stylesheets/application.css

And replace it completely with a new file, app/assets/stylesheets/application.scss, containing:

(Do not save the old application.css file or rename and append to it; do not include the contents from the old file in the new .scss file.)

@import 'bootstrap';
@import 'font-awesome';
@import 'common_core';

Test it

Now go ahead and start your rails server with bundle exec rails server

Go to /users/sign_up and create a new user account (be sure to enter the same password twice)

You will then be redirected to the Home page, which is now the Rails “Yay! You’re on Rails” default page. That’s fine; go ahead and customize your root URL.

Today’s Example App

I’ll make a super-simple system today where Users have many Events. An Event belongs to a Format, which in our fictional world has only two choices: Zoom or Outdoor.

Events have a name, a starting datetime, an ending datetime, and a “publicize” date, which should be before the starting time.

The two datetime fields (start_at and end_at) and the date field (promote_on) can be empty (nil), but if set they are enforced to be in order: start_at must be before end_at, and promote_on must be before start_at.

An Event will belong_to a Format (class & table). Formats will have a name field and only two records: Zoom and Outdoor. (We can assume they will be id 1 and id 2 in the formats table.)

All Events must belong_to a format, and when we create or edit an Event we can switch its format, but the format cannot be blank (null).

We already have the User object, and we also already have all of the login, logout, and forgot-password functionality provided by Devise.

We want the users to log in and go to a dashboard of their own events.

They should be able to create, edit, & delete their own events with only the validations discussed above.

They should not be able to edit or create events belonging to other users, even by hacking the query parameters.

Finally, the user should be able to edit their own name and timezone, but not any other user’s name or timezone.

Because we want to name our routes from the perspective of the context-interaction, we’ll namespace the controller to Dashboard:: in our Ruby code and /dashboard in the URL. The controller will be at controllers/dashboard and the views will be at views/dashboard

Make the models

Since you already made the User model in the devise setup, let’s go ahead and create the Events and Formats tables.


bundle exec rails generate model Event user_id:integer name:string start_at:datetime end_at:datetime promote_on:date description:string format_id:integer

Then open up the migration file and edit the description line, adding a limit larger than 256, like 400 (e.g., t.string :description, limit: 400).

Next run bundle exec rake db:migrate to create the table.

Before I go further, let’s edit our models just a bit.

Open models/user.rb and add has_many :events

also add validates_presence_of :name

class User < ApplicationRecord
  # Include default devise modules. Others available are:
  # :confirmable, :lockable, :timeoutable, :trackable and :omniauthable
  devise :database_authenticatable, :registerable,
         :recoverable, :rememberable, :validatable

  has_many :events
  validates_presence_of :name
end

Likewise, on the Event object, defined in models/event.rb, you’ll need to add the other side of the relationship: belongs_to :user and belongs_to :format.

class Event < ApplicationRecord
  belongs_to :user
  belongs_to :format
end

Now make a formats table

bundle exec rails generate model Format name:string

      invoke  active_record
      create    db/migrate/20200808233939_create_formats.rb
      create    app/models/format.rb
      invoke    test_unit
      create      test/models/format_test.rb
      create      test/fixtures/formats.yml

Modify the migration file to create two dummy Formats, adding this after the end of the create_table block.

Because the COVID quarantine prohibits indoor events during 2020, we want to make only two format records: “Zoom” and “Outdoor” events.

Format.create(name: "Outdoor")
Format.create(name: "Zoom")

Your edited migration looks like this:

class CreateFormats < ActiveRecord::Migration[6.0]
  def change
    create_table :formats do |t|
      t.string :name
    end

    Format.create(name: "Outdoor")
    Format.create(name: "Zoom")
  end
end

Then run the migration itself, which will now make the table & the two format records.

bundle exec rails db:migrate

-- create_table(:formats)
   -> 0.0025s
== 20200808233939 CreateFormats: migrated (0.0027s) ===========================

Now we have two Formats in our database.

Localized Timezone Support

In order to show dates, we use a localized date display: the date & time is always shown to the user in their own timezone. To do this you have two choices: (1) you can save the timezone to the user’s table and let users set it for themselves, or (2) you can show everybody the server’s timezone.

Option #1 – Store timezone On the User object

We already took care of this by adding timezone to our User object.

Option #2 – Use the Server’s Timezone

If the auth object (current_user) does not respond to timezone, the Rails “system clock” will be used: the system clock’s timezone as set by the Rails app. This is often the timezone of the headquarters of the company that owns the application. (That is, if you do not know the user’s context, you simply use your own company’s context instead.)
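The lookup order described above can be sketched in plain Ruby. This illustrates the logic only, not Common Core’s actual source; effective_timezone and the default value are my own names:

```ruby
require 'ostruct'

# Hypothetical helper illustrating the fallback described above:
# use the user's stored timezone when present, else the app default.
def effective_timezone(auth_object, app_default = "Eastern Time (US & Canada)")
  if auth_object.respond_to?(:timezone) && auth_object.timezone
    auth_object.timezone
  else
    app_default
  end
end

user = OpenStruct.new(timezone: "Pacific Time (US & Canada)")
effective_timezone(user)        # => "Pacific Time (US & Canada)"
effective_timezone(Object.new)  # => "Eastern Time (US & Canada)"
```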

Make the Controller, Views & Specs

Next we’re going to build the heart of the app. We’ll make two controllers: Dashboard::EventsController for editing events and Dashboard::UsersController for the user editing their own name.

First, let’s create the Events Controller

rails generate common_core:scaffold Event namespace=dashboard --with-index

A few things to note

  1. Use the ‘generate’ command, not a rake task.
  2. When passing the model name, pass it in the singular form
  3. Here I’ve provided the namespace= option with a value of dashboard. You will see this is important to how our code comes out.

Here is the heart & soul of the common core: 5 .js.erb files, 5 .haml files, a controller, and a controller spec. Also along for the ride came controllers/dashboard/base_controller.rb as well as views/dashboard/_errors.haml, and layouts/_flash_notices.haml

Take a peek through all the generated code now.

Pay particular attention to _line.haml and _form.haml. You will note _form.haml conveniently is used for both the new/create actions and also for the update action, unifying the layout of your record across CRUD.

You can use both to customize your app quickly and easily.

One more quick step, add this to your routes.rb file.

Make sure to nest the :events route within the :dashboard namespace, as shown here. If you aren’t familiar with namespacing in Rails, check out this blog post on my other blog, The Rails Coach.

Rails.application.routes.draw do
  devise_for :users

  namespace :dashboard do
    resources :events
  end
end

Start your server with

bundle exec rails server

If the Common Core finds null for timezone on your User object, it will default to either (1) whatever is set for your Rails app in application.rb or an environment file, or, if you don’t have this set, (2) the clock timezone of the server that is running your Ruby application.

It’s generally a good idea to set your Rails app timezone to the same timezone of your company’s headquarters, and then don’t change it, because that way if your server happens to move from one timezone to another (for example, you migrate from a server on the East coast to the West coast), your app will be unaffected. If your company changes timezones, you can either leave the Rails app as-is or change it, but be sure to note any place where your default timezone comes through.

config.time_zone = 'Eastern Time (US & Canada)'


We can now do all of these fancy things.

Create an event. Leave name or format blank, get an error.

When you create a new event, you must give it a name and Format. Notice how if you don’t, the Rails-side logic will return the form shown with the erroneous fields marked in red.

Edit an Event

Your model-level validations — name and format as required — are enforced in the update action as well.

Deleting An Event

Adding Validation

Add this to your Event class in app/models/event.rb

validate :start_at_before_end_at, if: -> { !start_at.nil? && !end_at.nil? }

def start_at_before_end_at
  if end_at < start_at
    errors.add(:start_at, "can't be after end at")
    errors.add(:end_at, "can't be before start at")
  end
end

*Validation magic*

Finally, to add validation on all the date fields, here’s our completed Event model

class Event < ApplicationRecord

  belongs_to :user
  belongs_to :format

  validates_presence_of :name

  validate :start_at_before_end_at, if: -> { !start_at.nil? && !end_at.nil? }
  validate :promote_on_before_start_at, if: -> { !promote_on.nil? && !start_at.nil? }

  def start_at_before_end_at
    if end_at < start_at
      errors.add(:start_at, "can't be after end at")
      errors.add(:end_at, "can't be before start at")
    end
  end

  def promote_on_before_start_at
    if start_at < promote_on
      errors.add(:promote_on, "can't be after start at")
    end
  end
end
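If you want to sanity-check this ordering logic outside Rails, here is a plain-Ruby sketch of the same rules (event_errors is a hypothetical stand-in for illustration, not part of the generated app):

```ruby
require 'date'

# Plain-Ruby mirror of the model validations above: each check only
# runs when both of its dates are present, like the `if:` guards.
def event_errors(start_at:, end_at:, promote_on:)
  errors = []
  errors << "start_at can't be after end at" if start_at && end_at && end_at < start_at
  errors << "promote_on can't be after start at" if promote_on && start_at && start_at < promote_on
  errors
end

event_errors(start_at: Date.new(2020, 8, 10),
             end_at:   Date.new(2020, 8, 9),
             promote_on: nil)
# => ["start_at can't be after end at"]
```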

Account Dashboard

Next we’re going to create the very simplest of Account Dashboards. Remember that Devise already handles log in, log out, and forgot password, meaning most of the heavy lifting of user authentication has been taken care of.

In this simple app, we want an Account dashboard that lets us edit only two fields: name and timezone.

First let’s add the route to routes.rb

Rails.application.routes.draw do
  devise_for :users
  # For details on the DSL available within this file, see

  namespace :dashboard do
    resources :events
    resources :users
  end
end

Next let’s generate the scaffolding:

rails generate common_core:scaffold User namespace=dashboard

We now instantly have a very basic dashboard for the User to edit their own details.

The final finishing touch here will be to make the Timezone into a drop-down.

To do this, we’ll create a non-ActiveRecord model:

class UsTimezone
  @@_US_TIMEZONES = {
    -5 => 'Eastern',
    -6 => 'Central',
    -7 => 'Mountain',
    -8 => 'Pacific',
    -10 => 'Hawaii–Aleutian'
  }

  def self.all
    @@_US_TIMEZONES.collect { |k, v| { label: v, value: k } }
  end

  def self.utc_to_name(input) # in hours
    utc = input[0...-2].to_i
    @@_US_TIMEZONES[utc]
  end
end

Next go into views/dashboard/users/_form.haml, where we’re going to make our first customization.

We’re going to add this:

= f.collection_select(:timezone, UsTimezone.all, :value, :label,  {:prompt => true, value: @user.try(:timezone) }, class: 'form-control')

The full file looks like this

  %div{class: "form-group col-md-4 #{'alert-danger' if user.errors.details.keys.include?(:name)}"}
    = f.text_field :name, value: @user.name, size: 256, class: 'form-control', type: ''

  %div{class: "form-group col-md-4 #{'alert-danger' if user.errors.details.keys.include?(:joined_date)}"}
    = date_field_localized(f, :joined_date, @user.joined_date, 'Joined date', current_user.timezone)
  %div{class: "form-group col-md-4 #{'alert-danger' if user.errors.details.keys.include?(:timezone)}"}
    = f.text_field :timezone, value: @user.timezone, size: 256, class: 'form-control', type: ''
  %div{class: "form-group col-md-4 #{'alert-danger' if user.errors.details.keys.include?(:email)}"}
    = f.text_field :email, value: @user.email, size: 256, class: 'form-control', type: ''

First, remove the timezone text_field line above; that is the plain text field for the timezone that we don’t want.

In its place, add the new collection_select

  %div{class: "form-group col-md-4 #{'alert-danger' if user.errors.details.keys.include?(:timezone)}"}
    = f.collection_select(:timezone, UsTimezone.all, :value, :label,  {:prompt => true, value: @user.try(:timezone) }, class: 'form-control')

We now have a nice drop-down for our Timezone field. You can replicate this pattern for any field that you want to turn into a drop-down.


Common Core JS harnesses the power of many great things about Rails:

• Database migrations

• ActiveRecord assocations (has_many, belongs_to, etc)

• Scope chains for access control

• Devise for authentication

Remember, make your models first: add limits and defaults to your database fields by modifying your migrations. Then add the relationships between the tables using the standard ActiveRecord has_many, belongs_to, and has_one.

Then build the common core scaffolding & customize the views and controllers it produces.

With these powerful tools, you can build a dashboard-like app in minutes, complete with simple interface buttons that let your users accomplish most of what they’ll need. The philosophy is that you will want this dashboard as you initially introduce people to your product. The main logic of your application will likely live more in the models, service objects, and domain layer (business logic) parts of your Rails app. For this reason, you are encouraged to customize the files only lightly. (Add some verbiage or change the CSS to customize the look & feel.)

The code you build with common core is cheap and disposable. It is not very modern, but it gets the job done. It is just “good enough” to launch a sophisticated app on, but it isn’t good enough to impress your users with a really good UI.

For that, you’ll want to throw away the front-end code and replace it with a modern JS UI like React, Vue, Ember, or Angular.

By that time, the Common core will have already helped you build (1) a prototype, (2) your business logic, and (3) possibly even some controllers you can re-use (for example, remove the format.js responses and replace them with format.json to change the controllers into a JSON-responding API controller.)

Enjoy your rapid prototyping!

Programming Tools

Remember how strftime works in Ruby? Neither do I.

For a Good Strftime

A fancy little website to help you create the strftime syntax for Ruby.
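As a refresher, here are a few of the common directives the site helps you assemble:

```ruby
t = Time.utc(2020, 8, 8, 14, 30, 0)

t.strftime("%Y-%m-%d")   # => "2020-08-08"
t.strftime("%B %-d, %Y") # => "August 8, 2020"  (%-d drops the zero-padding)
t.strftime("%I:%M %p")   # => "02:30 PM"
```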


Google Analytics Part 2 (#31)

Today I’ll explore three core parts of Google Analytics: Audience, Acquisition, Behavior.

If you are setting up GA on your website, start with yesterday’s post: Google Analytics Part 1.

Remember, this broad overview will cover GA only in large brush strokes. I hope to introduce you to the basic concepts in web traffic analytics. After understanding these areas, you will want to move on to learn more about building Customized Reports, Conversions for e-commerce websites, different views for different stakeholders, and the newer beta features like Attribution as well.

Always Remember the Date Picker (Filter)

The first thing to keep in mind in these three areas of GA is the date selector. Remember that the date selector will always default to showing you the last 7 days through yesterday (that is, eight days ago through yesterday) whenever you open GA.

However, if you make a change to the selector, then switch tabs, you will be taken to the new tab but your date selection will persist. That is, the date you selected will be used to show you the data in the tab you switched into.

This date picker applies to all of these areas we’ll cover today: Audience, Acquisition, Behavior, and more of what you will see in GA too, so always keep the date picker in mind when switching between parts of GA.

The date picker takes a little getting used to. When you first disclose it, keep in mind you are picking both a starting date and an ending date. What is confusing is that there are two little boxes to the right (see below). Be sure that either the starting date or the ending date is selected (blue), like so:

What’s confusing about this date picker is that it’s easy to confuse yourself between whether you are picking the starting date or the ending date, especially if you want to change only the ending date.

You then choose a date on the calendar on the left. (After you pick the starting date, the date picker will switch to let you pick ending date.) Click Apply to apply the new date range to whatever you are currently looking at. The date selector is global to (almost) all of the views in GA (with the Home and Realtime tab as exceptions).

Click apply to choose this date range.


In the Audience section, GA is concerned with showing you who the visitors are. Here we see users, pageviews, and sessions, as well as pages per session (“pages / session”), the bounce rate, and demographics. The demographics include geographic region (GA will geolocate people by their IP address), their browser & operating system, their service provider, whether they are mobile or desktop users, and the language set in their browser.

All of this information is gleaned “magically” from the user’s web browser and IP address. What happens under the hood is that GA collects all of this in the user’s browser and then sends it back anonymously to GA. The anonymous part is important because GA is a tool that has an opinionated take on tracking identifiable data: don’t do it. (At the very least, don’t do it in GA.)


The acquisition tab is concerned with where the people came from:

Let’s review the fundamental terminology in traffic analysis.

Organic Search — People searched on search engines and then found your site through an organic search result.

Direct — People typed your URL into their web browser.

Referral — People clicked a link from another website (one that was not a social network) and came via that website to your website.

Social — People clicked links or were directed from a social network (Facebook, Twitter, Instagram, LinkedIn, etc)

At first, these are the only ones you will see. Over time, if you start advertising, you will also see Display and Paid Search. Display refers to people who came from a display ad on the internet— i.e., a banner, sidebar, or paid placement. And finally Paid Search refers to an ad that you paid to the search engine to display alongside a search. (These are marked in all search engines as “Advertisement.” Sometimes they appear on the side of the search results and sometimes they appear deceptively within the search results. Either way, the merchant— you— is paying the search engine for that placement. )

Here we see a fictitious view of web traffic for a site with thousands of visitors.

Take a closer look at the pie chart that breaks this down. Remember, the data is always displayed to you using the date selection you indicate.

Here we see that during the dates selected, this site had 35.7% of its traffic from Paid Search advertising, 22.5% from organic search, and so on. As you can see, you can hover over any of these in the pie chart to reveal the details of its numbers.

The Acquisition section is also where you’re going to connect your Google AdWords account if you advertise on Google. The Acquisition area also has views into Social traffic and Search traffic, both essential for understanding where your leads are coming from.


On the Behavior tab, GA is concerned with behavioral things it can track about your visitors: sessions per user, session duration, and flow.

The powerful Behavior Flow view shows you a chart of people’s first, second, third, and so on, pages. That is, where do most people start? From there, where do they go next?

Take a look at this fancy multi-dimensional chart. We see that most people start at my blog on the home page (/). From there you can see where most people go next, broken out into the different pathways people traverse my site.

Take special note of this Behavior Flow screen, which is telling you where your users most commonly start, then go to next, and so on.

This page will make more sense to you if you have a lot of visitors and relatively few pages. For example, your landing pages will always show up first, and then the “where people visit next” question can be used to understand the psychology of what people are clicking on (specifically, the calls to action on your website).

Those red lines you see to the right of green page visits indicate where people don’t continue to another page. These are called drop-offs. If you see a page with a high number of drop-offs, think about (A) whether the content on that page is repelling people from the sales proposition and/or (B) whether that page has calls-to-action that lead your customer to the checkout.

As you can see GA is an incredibly powerful tool. I hope you’ve enjoyed this brief introduction and that GA helps you get oriented to your site traffic.


Google Analytics Part 1 (#30)

I’m ending this month-long series with a two-part special about Google Analytics.

Today I present part one. Everyone who owns a domain name needs to know about Google Analytics (or “GA” for short). GA is a free tool that is available to pretty much every website. (The one caveat to this is large websites experience slower processing times of some data, which sometimes delays your data for about 24-48 hours. You can still see data in realtime on the Realtime tab, but other data come through after the fact. If that’s you, getting real-time in your reports can be achieved by upgrading to a paid product called “GA 360”).

When I say everyone who owns a domain I mean you: If you have a domain (or set up a new one) always remember to set up your Google Analytics right away. Google collects information about your site traffic as soon as it is installed. However, it does not know anything about the site traffic from before it was installed.

Always install Google Analytics right away on new websites and domains.

— Jason Fleetwood-Boldt

Today I’m going to cover a basic set of what might seem like boring operational stuff that you should start with up-front.

We’ll mostly be in the Settings area, but you must know these settings are available to understand what you will be looking at when you get to the analytics.

Accounts, Properties, and Views

An Account is a Google account, always associated with an email address: a Gmail address, or an address at any other domain that is using G-Suite For Business (Google’s paid business tools).

A Property is your website or app. You should create a unique property for each project, however, if you have subdomains, you have a choice of whether or not to have multiple subdomains in the same property or to create a new property for each subdomain.

A View is a “reporting view” — it says so specifically when you create one. For the purpose of this series, I will cover reporting views only lightly at the end of tomorrow’s post. A view is a special way to look at the data collected— you can set unique filters, e-commerce funnels, attribution models, and more. Views are best used by different stakeholders in your organization: i.e., the marketing team gets a separate view from the accounting team because they rely on different filters, funnels, or attribution models, and they care about different data points. (Or, more accurately, they analyze the data differently.)

Add A Property

Your first choice is between:

• Web

• App

• App & Web

For the purpose of measuring websites, we’ll choose Web.

Next, you enter your fully qualified domain name. (That’s the domain where your web server primarily operates. If your website or server redirects users from, for example, www.example.com to example.com, then you operate at what is known as the apex of your domain. This simply means it has no subdomain and no “www.” in front of it.) If your website does operate at www., however, you’ll want to include it here.

What you don’t include is the http:// or https:// part. I highly recommend you use https, which requires correctly installing/configuring an SSL/TLS certificate. (SSL and TLS are technologies from separate eras: the old SSL standard is being phased out because of two vulnerabilities known as Heartbleed and POODLE. Most of the time you see “SSL” on the internet, it refers to a technical concept that implements both SSL, the old standard, and TLS, the newer standard.)

In the old days, an SSL certificate was optional if you didn’t have a checkout, collect a payment, or have a login area (for example, a publish-only news site or blog). However, in 2014, Google publicized a movement called “Always SSL,” which effectively means that all websites should now have SSL/TLS certificates and that all websites should redirect traffic from HTTP to HTTPS immediately if users visit the non-secure site. Not only should websites have SSL/TLS, but doing so actually gives you a small preferential treatment in Google’s search algorithm. (Although Google’s algorithm is proprietary and highly secretive — and changes — I’ve heard from experts that the presence of https on your website counts for approximately 1% of the total picture of all the things Google uses to rank your website. This number is not insignificant, but it is also not monumental. That is, while everyone should put SSL/TLS on their domain, the lack of it will marginally but not dramatically affect your search ranking.)

To do this, you need to configure an SSL certificate which is beyond the scope of this post.

Be sure to choose “http://” or “https://” below when creating your GA property.

Then you come to the screen where you can first get the GA tracking code & find the GA Tracking ID.

All GA tracking IDs begin with UA-.

They follow this format:

UA-XXXXXXXX-Y

The Xs will always represent a number associated with the account where this property was created. The Y number will be sequential: the first property will be 1, the second 2, the third 3, and so on.
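If you ever want to sanity-check an ID’s shape in code, a simple regex does it (this pattern is my own illustration of the format described above, not something Google publishes):

```ruby
# Matches the UA-XXXXXXXX-Y shape: "UA-", an account number, a property index.
GA_TRACKING_ID = /\AUA-\d+-\d+\z/

"UA-12345678-1".match?(GA_TRACKING_ID)  # => true
"GTM-ABC123".match?(GA_TRACKING_ID)     # => false
```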

You will want to copy & paste this code into your “site editor.” Look for a place where you can modify the header, also known as the content inside of the <head> </head> tag section.

If you are working with a content management system, you will want to find where you can modify the <head> tag. Look for this:


Usually, your <head> tag will already contain much content: Do not modify this. Instead, insert the GA code, copied & pasted directly out of the GA interface box above.

Verifying GA On Your Website

When you’ve made this change live on your website, you now need to install a Google Chrome extension called GA Debug.

Once you install and activate the extension, visit your website.

Look for this icon in your Chrome window:

First, hover over the GA Debug icon, but don’t click it.

If as you hover over it you see “GA Debug: OFF” (as shown above), then click on the icon once to turn it on.

When GA Debug is on, the icon changes to blue-ish, like so:

Now that it is on, open the Chrome debug console under View > Developer > Developer Tools. (Or Option-Command-I).

Once you are in the Console pane, you’re going to want to go back to your main browser window and go to your website. Then, you’ll want to quickly switch (as it loads) back into the debugging console window in the background to look for the giant “text-art” that spells out “google analytics.” It looks like this:

Look for a line that says “Initializing Google Analytics” followed by some code-like output that contains your GA property ID:

If you don’t see this, go back and confirm that the GA script has been correctly added to the <head> tag of your website and confirm that this code change has been published and is live on your website.

Before we leave the Properties settings, be sure to note some more features in the Properties area of the Admin settings:

Excluding URL Parameters

Important: If you are advertising on Facebook, you must do this

Go to Settings >View > View Settings

Under “Exclude URL Query Parameters” add fbclid (as shown below) and be sure to scroll all the way down to click Save.

When you are done with settings, click on a tab in the left column (note in the settings view it is probably collapsed like you see here.)

The Home Screen

The GA Home screen shows you a few things. It is primarily designed to show you the last 7 days of traffic, with a blue box on the right that is showing you real-time traffic. (That’s how many people are on your website right now.)

In tomorrow’s post, I’ll cover a visit, session, session duration, and other terms you’ll need to be familiar with to understand what you’re looking at. For now, scroll through this page and see some of the quick insights it gives you: traffic by day over the last 7 days, and how you acquire and retain users. That means: what site they come from and how well you retain them, shown by cohort. For example, the week-by-week cohort shown here shows, of the people who visited Jun 21-Jun 27, how many of them first came less than 1 week ago (that’s “Week 0”). Everyone who came 1-2 weeks ago is in the “Week 1” cohort.

A month-by-month cohort is typically more useful and helps you measure the quality of your traffic over time. Specifically, it lets you see if a lot of the people who were introduced to your site in a given month keep coming back: if not, perhaps you brought bad-quality leads to the website that month. (Hopefully, you use this to correct the quality of your leads, focusing only on the highest-quality traffic that wants to re-visit your website.)

You get a quick view (and preview) of the time of day your visitors visit, the geographic breakdown by country, and a breakdown by desktop, mobile and tablet.

Conveniently, the Home screen has links which take you to other parts of GA to let you explore all of these more in-depth.

I’m going to skip over Customization because it is out of scope of this basic introduction.

Next, we’ll examine the Realtime tab.


Realtime lets you see, in real time, what/where/how people are visiting right now. Remember, GA knows a lot of things: the device, the IP address, the traffic source (that is, if they came from an ad or clicked from another site), and, if you hook them up, events & conversions (I will go over those in tomorrow’s post).

So Realtime is showing you a lot of the data that the rest of GA shows you, but only for traffic right now. This is critically important if you are in the news, or are affected by cyclical or media-related events that drive people to your site all at once.

I hope you’ve enjoyed this brief introduction to Google Analytics. Tomorrow I’ll cover some key concepts in traffic analytics and also the Audience, Acquisition, Behavior, and Conversion tabs in GA.


Down For Everyone Or Just Me (#29)

Ever try a website only to have it not load and wonder to yourself, “I wonder if this is down for everyone or if it is a problem with my network?”

Because sometimes connections drop, or your DNS is somehow holding onto a change, or you have some kind of cookie or session issue with a website, to the rescue comes Down For Everyone Or Just Me (downforeveryoneorjustme.com).

The beauty of Down For Everyone Or Just Me is that it forms a triangle: you access DFEOJM, then DFEOJM pings the real website, and then DFEOJM responds to your request.

Most of the time, if you can actually access DFEOJM, your internet connection is working OK, so, in theory, you should also be able to access the website you are trying to get to. But every now and then you run into a DNS resolution-related bump (that means a problem getting your domain name or subdomain found on the internet’s DNS, which is kind of like a global directory).
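You can run a rough version of this check yourself with Ruby’s standard-library resolver (the hostnames here are placeholders; .invalid is a reserved TLD that never resolves):

```ruby
require 'resolv'

# Returns true if the hostname resolves to an address, false otherwise.
def resolves?(host)
  Resolv.getaddress(host)
  true
rescue Resolv::ResolvError
  false
end

resolves?("example.com")          # usually true, if your DNS is working
resolves?("no-such-host.invalid") # => false
```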

This tool is like having “a friend on another network” who can test to see if the website is down for them too.

Also, you might remember the attack on DynDNS on October 21, 2016, which had regional effects: people on providers in North America were unable to get DNS resolution for certain domains because of a DDoS (distributed denial of service) attack on DynDNS, which in turn affected most websites across America. If a domain is experiencing some kind of regional resolution failure, as was seen on 10/21/2016, DFEOJM can be used to test DNS resolution remotely.