Mobile performance testing using JMeter & Terraform - getting started

Everyone has performance and scalability on their minds these days, so I am working on a project to bring more visibility to our bottlenecks.  The project will center around simple 'user scenarios':
  • We send a huge batch of push notifications, and a subset of users will open the app.  That simple action actually does a bunch of stuff, like authentication, refreshing feeds, and so on.
  • We make a change that dumps auth, and every user has to log in fresh.
  • We add an endpoint as part of a marketing effort (say, a news page), and a push notification lets users know this new thing is a thing.  The endpoint gets 100k hits in 5 minutes.  Does it survive?
In all these scenarios you're dealing with read-only traffic, so there's nothing super fancy to put together.

Nobody here really knows much about performance testing (aside from previous experience showing that a massive effort wasn't worth it, which I agree with), so I wanted to make a point of not turning this into a behemoth.  What we really wanted to get out of this:
  1. Better understanding of how our apps function in real-world user scenarios
  2. Basic understanding of how our ecosystem reacts to mass user scenarios
  3. Some experience under our belts using the tooling
We (the QA folks and I) sat down for an hour to talk it through, and ended up diagramming a few basic user scenarios that everyone agreed on.  Then - ok, how do we reproduce this?  Well, you sniff network traffic.  Haha, seriously.  Once it leaves your phone, a mobile app is just HTTP requests flying around, and your middleware apps are listening for them.  So to reproduce user load, you need a way to generate a lot of very similar HTTP sessions.

JMeter was 'the thing mentioned most', both by co-workers and Google, so I started there.  After getting it running and using an old iPhone we had sitting around, I quickly realized a few things:
  • JMeter's UI is kinda funny - the icon layout to expand/collapse is not the usual + sign.  It took an embarrassingly long time to figure that out.
  • Getting a cert installed on an iOS device must be done via the NATIVE mail app.
  • Firewalls.
  • Understanding 'how JMeter works' is a bit of a learning curve, but once you get it, it seems so simple!
Once we saw the traffic coming in, it was pretty cool/the best feeling ever.
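For anyone trying to reproduce this, the recording setup looks roughly like the below.  It's a sketch based on JMeter 3.x defaults - your paths and ports may differ:

# On the workstation: launch JMeter and add an HTTP(S) Test Script Recorder
# to the test plan.  Its proxy listens on port 8888 by default.
./bin/jmeter

# Starting the recorder generates a CA cert at bin/ApacheJMeterTemporaryRootCA.crt.
# Email that cert to the iPhone and install it via the NATIVE mail app
# (the iOS gotcha from the list above).

# On the phone: Settings > Wi-Fi > your network > HTTP Proxy > Manual,
# server = the workstation's IP, port = 8888.  Then poke around the app
# and watch the samplers pile up in JMeter.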

I then got to google regex extractors and regexes for filtering URLs, and once that was figured out we were in business!  Thankfully the auth piece was still fresh in mind: we were working on our API test framework, and I had just finished getting auth working.
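To illustrate (this is a hypothetical fragment, not our actual plan), a Regular Expression Extractor in the .jmx looks something like this - pulling a token out of a JSON auth response so later samplers can reference it as ${authToken}:

<!-- Hypothetical: the "token" field name is an assumption, not our API -->
<RegexExtractor guiclass="RegexExtractorGui" testclass="RegexExtractor" testname="Extract auth token" enabled="true">
  <stringProp name="RegexExtractor.useHeaders">false</stringProp>
  <stringProp name="RegexExtractor.refname">authToken</stringProp>
  <stringProp name="RegexExtractor.regex">"token"\s*:\s*"([^"]+)"</stringProp>
  <stringProp name="RegexExtractor.template">$1$</stringProp>
  <stringProp name="RegexExtractor.default">TOKEN_NOT_FOUND</stringProp>
  <stringProp name="RegexExtractor.match_number">1</stringProp>
</RegexExtractor>

As for the URL filtering: the recorder's 'URL Patterns to Exclude' field takes regexes like .*\.(png|jpg|gif|css|js|ico).* so static assets don't clutter the recording.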

So now we have a functional JMeter test plan: you spec more threads, it does more stuff.  Fancy feast.
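For the 'more threads' part, a standard JMeter trick is to drive thread count from a property instead of hand-editing the plan: set Number of Threads in the Thread Group to ${__P(threads,10)}, then let non-GUI mode do the heavy lifting.  The plan filename here is made up:

# -n = non-GUI, -t = test plan, -J = set a property, -l = results log
./bin/jmeter -n -t mobile-scenarios.jmx -Jthreads=500 -l results.jtl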

Next up, we decided that load-testing vendors were the way to go!  But what kind of performance did we need from them?  Was 1,000 concurrent users really 1,000 concurrent users?  Turns out, not always!  Our starting-point use case was 2,000 req/s from 1,000 concurrent users - i.e., each 'user' has to turn around two requests per second to hit that rate.

With our simple/fast test plan, this would apparently overwhelm the load generator's CPU/network, so we would probably only get the equivalent of 300 concurrent users.  This meant enterprise tier, and that meant a trip back to the drawing board.  We were also, handily enough, working on learning Terraform, and said - HEY, WE CAN DO THIS!
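The reason rolling our own can work: JMeter has a built-in distributed mode, where each slave node runs jmeter-server and the master farms the test plan out to all of them.  Roughly (hostnames made up):

# On each slave node:
./bin/jmeter-server

# On the master: -R takes a comma-separated list of slaves.  Each slave runs
# the FULL thread count, so 4 slaves x 250 threads = 1,000 concurrent users.
./bin/jmeter -n -t mobile-scenarios.jmx -Jthreads=250 \
  -R 10.50.1.11,10.50.1.12,10.50.1.13,10.50.1.14 \
  -l results.jtl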

Our tech stack for this is now:
  • JMeter - build the test plan, record device traffic, etc.
  • Terraform - builds a fresh VPC et al., a master node, and slave nodes (sketch below)
  • Ansible - configures JMeter on the nodes
  • TeamCity - will be the trigger point to build/execute all this jazz
  • S3 - where the reporting will go
  • Slack - where the reporting links will go
The only net-new tech here is JMeter.  Fun!
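To give a flavour of the Terraform piece, here's a heavily trimmed sketch - the real config needs security groups, keys, and provisioning hooks, and the variables and instance type below are assumptions:

# Trimmed sketch, not production config
resource "aws_vpc" "loadtest" {
  cidr_block = "10.50.0.0/16"
}

resource "aws_subnet" "loadtest" {
  vpc_id     = aws_vpc.loadtest.id
  cidr_block = "10.50.1.0/24"
}

resource "aws_instance" "master" {
  ami           = var.jmeter_ami    # hypothetical variable
  instance_type = "c4.xlarge"
  subnet_id     = aws_subnet.loadtest.id
  tags          = { Role = "jmeter-master" }
}

resource "aws_instance" "slave" {
  count         = var.slave_count   # hypothetical variable
  ami           = var.jmeter_ami
  instance_type = "c4.xlarge"
  subnet_id     = aws_subnet.loadtest.id
  tags          = { Role = "jmeter-slave" }
}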

The best part!  From the basic steps of figuring out 'how to performance test our apps', we have already learned valuable things that are now tickets being worked on.  We aren't even done with the project and it's already delivering value - the best kind of project!

You'll note that no code has been published here, and that's intentional.  The only 'new' code we'll be writing is in Ansible roles for remote JMeter testing.  Even that will be pretty straightforward.
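That said, here's a sketch of the kind of tasks the role will have (the version, paths, and the jmeter_role variable are assumptions, not our actual role):

# roles/jmeter/tasks/main.yml - hypothetical sketch
- name: Install Java, which JMeter needs
  yum:
    name: java-1.8.0-openjdk
    state: present

- name: Download and unpack JMeter
  unarchive:
    src: https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-3.3.tgz
    dest: /opt
    remote_src: yes

- name: Start jmeter-server on the slave nodes (fire and forget)
  shell: nohup /opt/apache-jmeter-3.3/bin/jmeter-server > /var/log/jmeter-server.log 2>&1 &
  when: jmeter_role == "slave"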

My key takeaways so far:
  • Thinking about something and writing stuff down is powerful
  • Taking the effort to get a bunch of people together to talk about a pressing issue and turn it into action is worth it (vs. it continuing to be 'the thing everyone keeps talking around')
  • Exploratory learning of seemingly simple things can net great results - if the results are shared!
  • Terraform is crazy and horrible and wonderful
  • JMeter is kinda creaky but seems to 'just work'

Will post up some results in the future.
