Conference notes: Automation for Bug Hunters (Bug Bounty Talks)

Posted in Conference notes on July 25, 2018

Hi, these are the notes I took while watching the “Automation for Bug Hunters - Never send a human to do a machine’s job” talk given by Mohammed Diaa (@mhmdiaa) for Bug Bounty Talks.


This talk is about automation for bug hunters.

Why do we need automation?

To avoid boredom

  • Boredom is a bad thing, especially when your job is to be creative & solve new problems

Boredom & drudgery are evil. Hackers (and creative people in general) should never be bored or have to drudge at stupid repetitive work, because when that happens it means they aren't doing what only they can do: solve new problems. This wastefulness hurts everybody. Therefore boredom and drudgery are not just unpleasant but actually evil.

To avoid repetitive work

  • Repetitive work wastes our time & energy, and may leave you too exhausted to do what's really worth your time

Spending too much time on recon has been a mistake I've made in the past. By the time I start to hunt for bugs I would then be either too exhausted or bored to dig deep. - Mathias Karlsson (@avlidienbrunn), Bug Bounty Forum AMA

To test new theories

  • Automation can help you test a theory quickly
  • Example:
    • Cracking the lens: targeting HTTP’s hidden attack-surface
    • James Kettle collected every host that he could hack legally (i.e. bug bounty programs)
    • He tried a new attack technique on all of them =>
      • He proved that this issue could be found in the wild & in real-world environments
      • Some servers responded in a bizarre way to his probes => He found more variations of this issue

To monitor online assets

  • Keep an eye out for changes / new assets
    • The most important reason if you’re a full time bug hunter
    • Monitor your target’s online assets & get notified whenever they put anything new online
    • => Can give you an edge over other hunters who don’t know that this asset came online
  • Successful hackers like @naffy & @shubs already do this

What can we automate?

Environment setup

  • It can be laborious to install & configure a new environment
    • You have to download & setup tools, setup other things like a logging mechanism, etc
    • Mohammed uses git for logging. When he runs a tool, the tool's output gets committed and pushed to a remote server along with the command used and the time of running
  • Very handy when your IP address gets blocked by an app firewall (like Akamai)
    • Just run one command to set up a new server & carry on your tests
  • Allow your tools to create new servers on their own
    • When you run a tool, it checks if you’re blocked. If you are, it moves itself to another box & continues
  • Many options:
    • Kali rolling on AWS: If you want a full blown Kali installation
    • Docker or Shell scripts: If you’re into customization & having only the tools you need
    • Terraform: an open source project for managing the creation of disposable environments
      • @mhmdiaa hasn’t tried it but it’s interesting & worth taking a look at
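The git-based logging Mohammed describes could be sketched roughly like this — the repo layout and commit-message format below are my assumptions, not details from the talk:

```python
import shlex
import subprocess
from datetime import datetime, timezone

def commit_message(command, timestamp):
    """Record the exact command and when it ran, so `git log` doubles as a journal."""
    return f"{command} @ {timestamp.isoformat()}"

def run_and_log(command, outfile):
    """Run a recon command, save its output to a file, then commit & push it."""
    ts = datetime.now(timezone.utc)
    result = subprocess.run(shlex.split(command), capture_output=True, text=True)
    with open(outfile, "w") as f:
        f.write(result.stdout)
    subprocess.run(["git", "add", outfile], check=True)
    subprocess.run(["git", "commit", "-m", commit_message(command, ts)], check=True)
    subprocess.run(["git", "push"], check=True)  # mirror the log to the remote server
```

Keeping the command and timestamp in the commit message means the remote git history is the audit trail, with no extra logging infrastructure.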


Recon and basic testing

  • There are many tools; the trick is to make them work together
  • To get the most out of your tools, chain them & create workflows out of them
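One minimal way to chain tools is to treat each one as a line-in/line-out filter, so the output of one (say, a subdomain enumerator) becomes the input of the next (say, a liveness prober). The specific tool names in the comment are illustrative, not prescribed by the talk:

```python
import subprocess

def run_tool(cmd, input_lines):
    """Feed newline-separated input to a CLI tool and collect its output lines."""
    proc = subprocess.run(cmd, input="\n".join(input_lines),
                          capture_output=True, text=True)
    return [line for line in proc.stdout.splitlines() if line]

def chain(workflow, seed):
    """Run each tool in turn, piping the previous tool's output into the next."""
    data = seed
    for cmd in workflow:
        data = run_tool(cmd, data)
    return data

# e.g. chain([["subfinder", "-silent"], ["httprobe"]], ["example.com"])
```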

Monitoring (the past & the future)

  • Most people look only at what's accessible now, not at what was accessible before or what will be accessible later

Monitoring the past

  • Companies that care about security now weren’t necessarily always like that

Sources of information

  • Google time filter

  • Wayback Machine (or WaybackUnifier, a wrapper around the Wayback Machine)

    • Given a URL, it queries the Wayback Machine for all its archived versions, tracks the unique parts of each version & creates a unified file containing them
    • You can use it on:
      • robots.txt (trick shared by @zseano)
      • API documentation pages (trick shared by @filedescriptor)
      • JS files
        • Look for old endpoints & leaked API keys (it doesn’t matter if the syntax is messed up)
      • HTML pages to find comments disclosing sensitive information, more JS code, old endpoints, old input names
  • Old mobile app versions

    • May contain:
      • Credentials
      • Old endpoints
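The core idea behind WaybackUnifier reduces to a small merge step: gather every archived version of a file and keep each distinct line once. A sketch of that merge step only — the actual fetching of snapshots from the Wayback Machine is omitted:

```python
def unify(versions):
    """Merge snapshots of one file: keep each unique line once, in first-seen order.

    This mirrors the unification step WaybackUnifier performs over
    Wayback Machine snapshots of a URL (e.g. robots.txt or a JS file).
    """
    seen, unified = set(), []
    for snapshot in versions:
        for line in snapshot.splitlines():
            if line and line not in seen:
                seen.add(line)
                unified.append(line)
    return "\n".join(unified)
```

Run over years of robots.txt snapshots, the unified file surfaces old disallowed paths that may still exist; it doesn't matter that the merged file is no longer syntactically valid.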

Monitoring the future

Monitor the following for changes or updates:

  • API documentation
    • Look for new endpoints
  • JS code
    • Usually the only white-box part of the engagement; take advantage of it!
    • Updated code may contain new endpoints, leak secrets, introduce new bugs
    • You should master the art of analyzing JS code
  • Mobile app updates
    • Look for new endpoints & credentials that may be leaked
    • E.g. he created a changelog of endpoints while working on the Instagram mobile app & tested the new ones first, FTW
  • Dev blogs & engineering blogs
    • Can give you great insight into new products & features
  • Google news
    • Useful if your target doesn’t have a dev blog
  • Everything else
    • Keep your eyes on anything that gets updated (as we’ll see later)
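Monitoring any of these sources boils down to the same loop: fetch the asset, hash it, compare against the last stored hash, alert on a difference. A minimal sketch of the comparison step (storage and notification left out):

```python
import hashlib

def fingerprint(content):
    """Stable fingerprint of an asset's raw contents (bytes)."""
    return hashlib.sha256(content).hexdigest()

def has_changed(content, last_hash):
    """True when the asset is brand new (no stored hash) or differs from last time."""
    return last_hash is None or fingerprint(content) != last_hash
```

Storing only hashes keeps the monitor cheap; when `has_changed` fires, you fetch and diff the full content to see what actually changed.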

How to do it?

Example of workflow to help familiarize you with the concept of chaining tools:

  • Input can be a domain, registrant name or registrant email.

  • Tko-subs: test for subdomain takeover
  • Shocker: test for Shellshock
  • Second Order: test for second order subdomain takeover
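Unlike a linear pipeline, this workflow fans the same host list out to several independent checks. One way to sketch that fan-out — the check functions here would wrap the actual tools (tko-subs, Shocker, Second Order):

```python
def fan_out(hosts, checks):
    """Run several independent checks over the same host list
    and collect the findings keyed by check name."""
    return {name: check(hosts) for name, check in checks.items()}

# e.g. fan_out(discovered_subdomains,
#              {"tko-subs": takeover_check, "shocker": shellshock_check})
```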

Introducing Bounty Machine

  • By Anshuman Bhartiya & Mohammed Diaa
  • Purpose: allow researchers to compose complex workflows in a modular fashion (without modifying the code)
  • It will implement all the mentioned workflows & more


  • Runs multiple tools in a chain
  • Fully modular (possible to plug in any new tool)
  • Monitoring
  • Customized notifications

How to add a new tool

  1. Build a Docker image for your tool
  2. Define what data it needs
  3. Define what data it produces
  4. Specify whether you want to get notified when it finds something
  5. Find a place for it in the workflow where it can play with other tools (optional)
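Those five steps map naturally onto a small declarative spec per tool. The dataclass below is a hypothetical illustration of that idea, not Bounty Machine's actual format:

```python
from dataclasses import dataclass

@dataclass
class ToolSpec:
    """Hypothetical per-tool definition mirroring the five steps above."""
    image: str            # 1. Docker image to run
    consumes: str         # 2. data type it needs, e.g. "subdomains"
    produces: str         # 3. data type it emits, e.g. "takeover-candidates"
    notify: bool = False  # 4. alert when it finds something?
    # 5. placement in the workflow follows from matching consumes/produces

tko_subs = ToolSpec(image="bountymachine/tko-subs", consumes="subdomains",
                    produces="takeover-candidates", notify=True)
```

Matching one tool's `produces` to another's `consumes` is what lets new tools slot into the workflow without code changes.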

What happens behind the scenes

  1. Run the tool
  2. Translate its output into something that other tools can use
  3. Check if the output has changed since the last time
  4. Notify the user about newly-found results
  5. Pass it to other tools to perform further checks
  6. Do this all the time for all targets
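Steps 3 and 4 amount to a set difference between the current run and the previous one; a minimal sketch:

```python
def new_findings(previous, current):
    """Results present in this run but not the last one — what to notify about."""
    return sorted(set(current) - set(previous))
```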

How can the community be more efficient?

  • What we do wrong: Rebuild existing tools too often

  • Why?

    • Unawareness of the existence of a tool
    • Boredom
    • Unmaintained projects
    • Different requirements
  • If your new tool isn’t helpful, you’re probably wasting time

  • Focus more on building new tools and extending existing ones

  • List of existing tools: https://bugbountyforum.com/tools/

  • List of tools we need: https://ideas.bugbountyforum.com

    • Contribute new ideas
    • If you like an idea, build a new tool for it


Takeaways

  • If something can be automated, automate it
  • Always monitor your target’s online assets
  • Dig into the past of your target
  • Your tools are good, but they’re better together
  • Share your tool suggestions
  • Tools should be easily connectable
  • Don’t reinvent the wheel (unless your wheel is rounder)

See you next time!