Charles Hooper

Thoughts and projects from a site reliability engineer

A Couple of Python Snippets

I haven’t updated in a while, so I decided to drop a couple of gists in here and call it a post. These snippets are incredibly simple and I don’t expect to “wow” anybody, but I was asked for them recently and figured I’d share them.

Group words by their first letter in Python

#!/usr/bin/env python

"""Group words by their first letter"""

from collections import defaultdict

def group_by_letter(words):
    buckets = defaultdict(lambda: [])
    for word in words:
        buckets[word[0]].append(word)
    return buckets

if __name__ == '__main__':
    print group_by_letter(['narragansett', 'brooklyn lager', 'magic hat', 'dog fish head', 'shock top', 'ten penny', 'bass'])
    # Output: defaultdict(<function <lambda> at 0x7fc83416b2a8>, {'b':
    # ['bass', 'brooklyn lager'], 'd': ['dog fish head'], 'm': ['magic
    # hat'], 'n':['narragansett'], 's': ['shock top'], 't': ['ten
    # penny']})
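A standard-library alternative, if you prefer one: itertools.groupby can do the same bucketing, with the caveat that it only merges *consecutive* items, so the input has to be sorted first. A Python 3 sketch:

```python
from itertools import groupby

def group_by_letter_sorted(words):
    # groupby only groups consecutive items, so sort before grouping
    return {letter: list(group)
            for letter, group in groupby(sorted(words), key=lambda w: w[0])}

print(group_by_letter_sorted(['narragansett', 'brooklyn lager', 'bass']))
# {'b': ['bass', 'brooklyn lager'], 'n': ['narragansett']}
```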

Merging list of lists in Python using reduce

#!/usr/bin/env python

"""Merging list of lists in Python using reduce()"""

def merge_lists(list_of_lists):
    return reduce(lambda x,y: x+y, list_of_lists)

if __name__ == '__main__':
    my_big_list = [ [1,2,3], [3,4,5], [6,7,8], ]
    print merge_lists(my_big_list)
    # output: [1, 2, 3, 3, 4, 5, 6, 7, 8]
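Worth noting for newer Python: reduce moved into functools in Python 3, and itertools.chain flattens in linear time (repeated list concatenation copies the accumulator on every step). A sketch of both:

```python
from functools import reduce  # no longer a builtin in Python 3
from itertools import chain
from operator import add

def merge_lists(list_of_lists):
    return reduce(add, list_of_lists)

def merge_lists_chain(list_of_lists):
    # linear time: no intermediate list is built per step
    return list(chain.from_iterable(list_of_lists))

print(merge_lists_chain([[1, 2, 3], [3, 4, 5], [6, 7, 8]]))
# [1, 2, 3, 3, 4, 5, 6, 7, 8]
```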

Common Single Point of Failure: People

Yesterday, when I arrived at my other job on my school’s help desk, I found out that my supervisor was not coming into work at all. This is OK; I enjoy the autonomy of working unsupervised. However, at this particular university’s help desk, my supervisor is the only person who can reset security profile information on student accounts. She is also the only person who assigns work orders to the technicians that work here. I’ll spare you the details, but probably 80-90% of our workload on any given day gets passed through this one person.

This is a serious problem. By passing tasks through a single person with no backup we are guaranteeing the collapse of our support system. I’ve seen this at other gigs and I bet you have, too.

Maybe it’s the one person who has access to the firewall or router. Or maybe there’s the one person who knows how to configure a particular piece of software or solve a specific problem. Truthfully, you’re probably that person and don’t even realize it. Ever get work-related phone calls (or worse: called in) during your “time off?” Red flag.

All of these conditions are single points of failure (SPoFs). Too often, we sysadmins, developers, and engineers only think of SPoFs in terms of hardware and software. But if we look at what actually makes up the entire information system (hardware, software, data, procedures, and people), we see that we’re part of it too. This hoarding of knowledge often results in a failure of the system itself and very frequently makes existing failures worse.


A customer-facing database server stops responding. You’re not really familiar with what database(s) it serves but customers are complaining that it’s down or very slow. There’s another person that normally handles this system but they’re out of town and completely unreachable. You want to diagnose but you don’t even know how to access the system. Do you blindly reboot (risking data loss and corruption)? Sit and wait it out? Learn how to summon your co-worker’s spirit?

One very real situation occurred when I worked at a small Internet Service Provider. A very big client of ours called and said that a very large portion of their network was down (we managed it, too). Did I have the credentials to the router in question? No. Did the client? No. Who did? That one person, the one who was usually too busy running around to return calls (incidentally, the owner). They did finally return our cries for help… 3 hours later. Was the problem difficult to solve? No. In fact, it was fixed within minutes of receiving the proper credentials. (Funny story: one of their on-staff techs plugged a network camera into the network and accidentally assigned their router’s address as the camera’s IP :)) Sure, this mistake was dumb, but did this client need to suffer degraded availability for those 3 hours? Absolutely not.


The obvious, and perhaps only, solution to this problem is to make as much of your knowledge available as possible. The more knowledge you offload from your brain, the better and more efficient the system becomes. I know to some this might seem a little counter-productive. After all, having this knowledge is job security…right?

No, absolutely not. Holding company knowledge hostage should never be how you ensure your job security (that’s a myth anyways).

With that being said, please don’t spend all your energy and effort on documentation only to abandon the effort a month later. A friend of mine mentioned to me recently that he often comes across company wikis that contain outdated information and haven’t even been logged into in 6 months.

Allow me to reiterate: do *not* go on documentation sprees. Document everything *when* you do it and share that information *when* you do it. Regularly. Constantly. If you wait until you have a lot of information to document, you will probably become overwhelmed and just not do it. When I was in the Air Force, we had a saying:

The job ain’t over till the paperwork is done.

Simply put, add documentation into your regular workflow. The investment is small and the returns are great.

Controlling Django Apps With an Init Script

If you’re reading this, you probably already know that an init script is a specific style of script that allows you to control daemon processes. In particular, they are used to start processes at boot and terminate them at shutdown. What follows is an example script I use to control one of my Django FastCGI projects. This particular example was written for Ubuntu and Debian but could probably be modified for RedHat/CentOS or other distros.

Please refer to your distro’s documentation on how to install and activate init scripts (hint: see /etc/init.d/ and the man page for update-rc.d if on Debian or Ubuntu).



#!/bin/sh

# The values below (paths, ports, user) are examples -- adjust for your project.
NAME="myproject"
DESC="My Django Project"
DAEMON="/usr/bin/python"
PROJECT_DIR="/var/www/myproject"
FCGIHOST="127.0.0.1"
FCGIPORT="8080"
RUNDIR="/var/run/$NAME"
PIDFILE="$RUNDIR/$NAME.pid"
RUNAS="www-data"
UMASK="0022"
ENV="env -i LANG=C PATH=/usr/local/bin:/usr/bin:/bin"

DAEMON_OPTS="$PROJECT_DIR/manage.py runfcgi host=$FCGIHOST port=$FCGIPORT pidfile=$PIDFILE"

test -x $DAEMON || exit 0

set -e

. /lib/lsb/init-functions

# Set up /var/run directory to write out pidfile
mkdir -p "$RUNDIR"
chown "$RUNAS" "$RUNDIR"
chmod 775 "$RUNDIR"

case "$1" in
  start)
    log_daemon_msg "Starting $DESC" $NAME
    if ! start-stop-daemon --start --quiet --oknodo \
        --pidfile $PIDFILE --umask "$UMASK" --chuid "$RUNAS" \
        --exec $DAEMON -- $DAEMON_OPTS; then
      log_end_msg 1
    else
      chmod 400 $PIDFILE
      log_end_msg 0
    fi
    ;;
  stop)
    log_daemon_msg "Stopping $DESC" $NAME
    if [ -f $PIDFILE ]; then
      kill `cat -- $PIDFILE`
      rm -f -- $PIDFILE
    fi
    log_end_msg 0
    ;;
  restart|force-reload)
    $0 stop
    $0 start
    ;;
  status)
    status_of_proc -p "$PIDFILE" "$DAEMON" "$NAME" && exit 0 || exit $?
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|force-reload|status}" >&2
    exit 1
    ;;
esac

exit 0

Automating Webcam Snapshots and Uploads to Flickr

With gardening season right around the corner, I wanted to set something up that would take automated, regular snapshots of some of my plants and upload them to Flickr. After a few cumulative hours, I finally cobbled together a solution.

Taking the Snapshots

The first thing I needed to do was to take snapshots from an installed USB webcam and save them to a directory. This needed to be able to run from a cron script so obviously it needed to work without a GUI and without user-interaction. I read in a Webcam Howto that I could do this using streamer so I installed it and wrote a short shell script that would iterate through the video devices installed on my PC and run the snapshot command. You can view the source of this script here.
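The gist itself is linked above, but the idea is simple enough to sketch in Python. This is a sketch, not the original script: the output directory is a made-up path, and it assumes streamer is installed (its `-c` flag selects the capture device and `-o` names the output image).

```python
#!/usr/bin/env python
"""Snapshot every attached video device with streamer (a sketch)."""
import glob
import os
import subprocess
import time

SNAP_DIR = '/home/me/snapshots'  # hypothetical output directory

def snapshot_command(device, outfile):
    # streamer: -c selects the capture device, -o names the JPEG to write
    return ['streamer', '-c', device, '-o', outfile]

def take_snapshots(snap_dir=SNAP_DIR):
    stamp = time.strftime('%Y%m%d-%H%M%S')
    for device in glob.glob('/dev/video*'):
        outfile = os.path.join(
            snap_dir, '%s-%s.jpeg' % (os.path.basename(device), stamp))
        subprocess.call(snapshot_command(device, outfile))

if __name__ == '__main__':
    take_snapshots()
```

Dropped into a crontab entry, this runs headless with no user interaction, which was the whole point.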

Uploading the Photos

Next, I wanted to automatically upload the files to Flickr. At first, I tried using a script I found online, which worked OK, but I also wanted to add my photos to a specific set, which that script didn’t do. I probably could have extended its functionality, but the script didn’t use or implement the full Flickr API, which made that task seem unnecessary.

Instead, I downloaded the Python Flickr API from Stuvel and in less than 90 lines I had working code to upload a directory of images to Flickr and add them to a given set. You can view the source to my Flickr uploader script here.
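The uploader linked above is the real thing; as a rough sketch of the approach using Stuvel's flickrapi package (the API key, secret, photoset ID, and exact call details here are assumptions from memory — check the package's documentation):

```python
import glob
import os

def find_images(directory, extensions=('.jpg', '.jpeg', '.png')):
    """Return image files in a directory, oldest first."""
    files = [path for path in glob.glob(os.path.join(directory, '*'))
             if path.lower().endswith(extensions)]
    return sorted(files, key=os.path.getmtime)

def upload_directory(directory, api_key, api_secret, photoset_id):
    import flickrapi  # Stuvel's Python Flickr API
    flickr = flickrapi.FlickrAPI(api_key, api_secret)
    for path in find_images(directory):
        # upload() returns an XML response carrying the new photo's id
        response = flickr.upload(filename=path)
        photo_id = response.find('photoid').text
        flickr.photosets.addPhoto(photoset_id=photoset_id, photo_id=photo_id)
```

Authentication (getting a token with write permissions) is the fiddly part and is covered by the package's own docs.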


Here are my pretty pictures :) My apologies for the quality, I’m using a really cheap webcam.

Correlating Last Login Dates With Signup Dates From a MMORPG

Yesterday, I wrote a blog post detailing how I crawled an entire MMORPG’s player database via their search page. Since then, I have been analyzing that data in Minitab and trying to gain some insight into the state of affairs of that game. Today, I’m going to attempt to explain some of that data using statistics and common sense. In particular, we’re going to find out if there’s a relationship between when players join the game *and* when they stop returning.


I’m new to the statistics software package I’m using, Minitab, and I’m not aware of an easy way to take measurements based on dates. So, my first order of business was to convert dates in the database to an easier metric for analysis, “days since today,” which is simply today’s date minus date x. I did this in my database (MongoDB) prior to export by adding a “last_seen_days” attribute to all documents (records). This attribute is simply the difference between today’s date and the date that the player stopped logging in – measured in days. I then did the same for the signup date. This was quickly done in the MongoDB console in just a few lines:

> var today = new Date();
> var day = 60*60*24*1000;
> db.accounts.find().forEach(function (o) { o.last_seen_days = Math.ceil((today.getTime() - o.last_seen.getTime())/day); db.accounts.save(o); })
> db.accounts.find().forEach(function (o) { o.date_joined_days = Math.ceil((today.getTime() - o.date_joined.getTime())/day); db.accounts.save(o); })
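For anyone doing the same thing from Python, a rough PyMongo equivalent (the collection name is hypothetical; the date arithmetic is the part that matters):

```python
import datetime
import math

DAY_SECONDS = 60 * 60 * 24

def days_since(then, today=None):
    """Whole days between a past datetime and now, rounded up."""
    today = today or datetime.datetime.now()
    return int(math.ceil((today - then).total_seconds() / DAY_SECONDS))

def add_day_metrics(collection):
    # `collection` is a hypothetical PyMongo collection,
    # e.g. pymongo.MongoClient().game.accounts
    for account in collection.find():
        account['last_seen_days'] = days_since(account['last_seen'])
        account['date_joined_days'] = days_since(account['date_joined'])
        collection.save(account)
```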

The Scatterplot

I then exported my data to CSV, loaded it in Minitab, and created a scatterplot between these two attributes. What I got was this:

Last Seen Date vs Signup Date

For the uninitiated, a scatterplot is a quick and easy way to see visually whether there’s any relationship (correlation) between two variables. In this case, I used the signup date as my independent variable (x) and the “last seen” date as my dependent variable (y). Overall, there is not any real relationship between the signup date and the last seen date. However, there are two significant items in this graph that deserve some attention.


The first and most obvious item is that there are not any points above the identity function. The identity function, or just f(x) = x, is the diagonal line directly across the center of the graph. This makes perfect sense since it’s impossible for a player to have their “last login” occur before they even sign up. I bring this up because this leads into my next observation:

There is a heavier concentration of data points plotted on or directly below the line of the identity function. Points exactly on the identity function are accounts that were registered but never logged into. Accounts *below* the identity function should be considered more significant to those who run the game. Why is that? Because, simply put, I believe that these accounts belong to players who went through the effort of joining: they signed up, validated their email address, logged in, and for whatever reason chose not to stick around. This is akin to the “bounce rate” so frequently mentioned in the context of web analytics.

It’s possible that these new players didn’t understand the interface and left, or maybe they thought the game play was too slow, or maybe… this list could go on. What’s important is that some attention is paid here. Some effort should be made to discover why these players are leaving, and the number of these players (or almost-players) should be measured, monitored, and analyzed. Decreasing this metric (“bounce rate”) should be a regular goal, as these players represent a potential revenue stream for the game’s owner as well as a potential contribution to the game for the rest of the players.

The Histogram

While the scatterplot helped us see that a noticeable number of players quickly “bounce” after joining the game, this type of graph doesn’t make it particularly easy to measure the magnitude of the phenomenon. Having observed the behavior, we next want to know how many players are leaving – what our “bounce rate” is. Instead of first trying to quantitatively define the bounce rate so that we can measure it, it’s probably best to take a look at the total distribution of how long players stay active before leaving. For this, we’ll use a histogram of “days active.” Days active is simply days since signup minus days since last login. Here’s what we’ve got:
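The “days active” computation and the 15-day binning behind the histogram are easy to sketch in Python:

```python
from collections import Counter

BIN_DAYS = 15

def days_active(date_joined_days, last_seen_days):
    # both inputs count backwards from today, so the difference is
    # the length of the account's active window
    return date_joined_days - last_seen_days

def histogram(all_days_active, bin_width=BIN_DAYS):
    """Count accounts per bin: bin 0 is 0-14 days, bin 1 is 15-29, ..."""
    return Counter(days // bin_width for days in all_days_active)

print(histogram([3, 10, 16, 200]))
# Counter({0: 2, 1: 1, 13: 1})
```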

I excluded the lowest rank from the histogram because I was more interested in how many potentially-active players were leaving, as opposed to junk accounts. As such, our definition of the bounce rate is already diverging from the bounce rate of web analytics.

Each bin (“bar”) in the histogram is 15 days wide. Knowing this, you can see from the histogram that the largest density of days active falls between about 15 days and 2.5 months. This chunk, while significant, doesn’t have much to do with our bounce rate mentioned above. What we’re instead interested in is the nearly 5% of players who become inactive in less than a week.

What’s Next?

If this were my game (it’s not), I would work on defining what level of bounce rate is acceptable and set some goals based on that. I would then look into the large number of players leaving within the first 2.5 months and try to increase player retention. Finally, I would automate these measurements and have them displayed in a nice administrative dashboard (I’ve always wanted one of those) so that I have to see them all the time.

Screen-Scraping Search Results for Information Retrieval

Recently I found myself in a situation where I needed to gather a large amount of data from a website but there did not exist any API, index, or otherwise publicly-accessible map of the data. In fact, the only mechanism for uncovering data to be collected was a very limited search engine.

In particular, I was trying to collect a list of (living, non-banned) usernames from a web-based RPG I play so I could then download, parse, and store their profiles for further analysis. I needed all of the data simply because there also was not any way in which I could get a truly random statistical sample.

The game’s search engine has these limitations and features:

  • Search is performed on username only and implicitly places a wildcard after the query. For example, if you search for “bob,” not only will “bob” be returned in the results, but also “bob123” and “bobafett,”
  • If a given search matches more than 35 results, then only the first 35 results are returned,
  • Results are sorted by username (alphabetically),
  • Usernames are case-insensitive and can only contain alphanumeric characters, i.e. {ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890},
  • Search queries cannot start with the character zero (“0”), but I happily overlook this,
  • The search engine does allow you to filter out players who have been killed or banned.

So, there I was, trying to crawl this game’s search feature using urllib and regular expressions. I first tried to search for “A”, then “B”, then “C”, and so on, but there were some obvious flaws with this method. In particular, because of the limit on the number of results that can be returned, this method would only yield 1,260 usernames. This wasn’t good enough because I knew from the game’s statistics page that I should be expecting a little more than 21,000 names!

The logical extension of that search method is to tack on an extra letter. For example, try “AA”, then “AB”, then “AC”, all the way down to “ZZ” (or, erm, “99” in this case). This seemed a lot better because, hypothetically, the keyspace is large enough to return more than twice as many usernames as I need – the math is (36^2)*35, or 45,360 usernames.
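A quick sanity check of that arithmetic:

```python
KEYSPACE = 26 + 10   # A-Z plus 0-9
RESULT_CAP = 35      # at most 35 results come back per search

# every two-character prefix, each returning a full page of results
two_char_ceiling = KEYSPACE ** 2 * RESULT_CAP
print(two_char_ceiling)  # 45360
```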

Unfortunately, this method falls apart very quickly because there isn’t an even distribution of usernames across the keyspace. I could try to go one level deeper on the searches (e.g., “AAA” to “AAB”, and so forth) but now we’re looking at 36^3 or 46,656 search pages I have to crawl, so this method is out of the question.

Making matters worse, I was completely naive as to what the distribution of usernames might actually look like. I know what it looks like now, but going in I had absolutely no idea what to expect. (Just in case you’re curious, you can see the actual distribution – sans accounts that start with “0” – below.)

Account Distribution by First Character

I decided, then, that I would start with the single characters “A” through “Z” and “1” through “9” and dynamically, recursively expand one level deeper whenever a search came back capped at 35 results. You can see this dynamic search-unfolding code here on Bitbucket (Python, lines 46 through 65).
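The linked script is the real implementation, but the expansion idea itself can be sketched like this (the `search` callable is a stand-in for the urllib-and-regex scraping; note that the real site also rejects queries starting with “0”):

```python
RESULT_CAP = 35
ALPHABET = 'abcdefghijklmnopqrstuvwxyz0123456789'

def crawl(search, prefix=''):
    """Collect usernames, recursing one level deeper only when a search
    comes back capped.

    `search` takes a prefix and returns the (at most 35) matching names.
    """
    results = search(prefix)
    names = set(results)
    if len(results) < RESULT_CAP:
        return names  # the search wasn't capped, so we saw everything
    for ch in ALPHABET:  # capped: expand the prefix one character
        names |= crawl(search, prefix + ch)
    return names
```

Keeping the capped results (rather than discarding them) matters: a name exactly equal to the prefix, like “bob” itself, would never be re-found by any longer prefix.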

The results were pretty positive. I crawled almost the entire set of alive, unbanned accounts in just over 2 hours (while I played video games and drank beer). I missed exactly 356 accounts, or about 1.6% of the population. While some of these may have been accounts that started with the character “0” (remember, I couldn’t crawl those), it seems more likely that many of these were aborted HTTP requests that failed and were handled by my ridiculous try/except: pass block.

Now that I have this data, it’s time for me to do something with it. You’ll hear more about that from me soon, I’m sure.

Multiple Vulnerabilities in Mingle Forum (WordPress Plugin)

Title: Multiple Vulnerabilities in Mingle Forum (WordPress Plugin)
Advisory URL:
Date Published: January 8th, 2011
Vendors Contacted: Paul Carter – Maintainer of plugin.

  1. Summary

Mingle Forum is a plugin for the popular blog tool and publishing
platform, WordPress. According to the author of Mingle Forum, “Mingle
Forum has been modified to be lightweight, solid, secure, quick to
setup, and easy to use.”

There exist multiple vulnerabilities in Mingle Forum, SQL injection
being among them.

  3. Vulnerability Information

Packages/Versions Affected: Confirmed on 1.0.24 and 1.0.26

3a. Type: SQL Injection [CWE-89]
3a. Impact: Read application data.
3a. Discussion: There is a SQL injection vulnerability present in the
RSS feed generator. By crafting specific URLs an attacker can retrieve
information from the MySQL database.

3b. Type: SQL Injection [CWE-89]
3b. Impact: Read application data.
3b. Discussion: There is a SQL injection vulnerability present in the
`edit post` functionality. By crafting specific URLs an attacker can
retrieve information from the MySQL database.

3c. Type: Auth Bypass via Direct Request [CWE-425]
3c. Impact: AuthZ is not performed for `edit post` functionality.
3c. Discussion: By browsing directly to the `edit post` page a user can
view and edit any page.

  4. PoC & Technical Description

4a. UNION SELECT 1,user_email,3,4,5,user_login,7 FROM wp_users #



  5. Report Timeline

12/17/2010 Initial email sent to plugin maintainer.
12/22/2010 Confirmation of first email requested.
12/31/2010 Correct email address obtained. Maintainer contacted again on
this date.
01/01/2011 Received response from plugin maintainer.
01/07/2011 Plugin maintainer releases update that addresses these
vulnerabilities.

  6. References

6a. The WordPress Plugin page for Mingle Forum:

  7. Legalese

This vulnerability report by Charles Hooper < > is
licensed under a Creative Commons Attribution-NonCommercial-ShareAlike
3.0 Unported License.

  8. Signature

Public Key: Obtainable via

Picking Applications to Audit

I’m sure almost any programmer will tell you that at some point they felt the need to work on a project but had no idea what to work on. This happens to me, too, even when it comes down to choosing what applications or services I want to audit. With practice, I’ve come up with a pretty good list of categories to choose software from and I would like to share them with you.

1. Applications or Service You Use

This is probably the most obvious way to choose an application or service to audit. However, to me, it’s also one of the hardest. It is difficult because it involves breaking out of your mental “user” mode, where you’re just using the application or service. I know that when *I’m* in “user” mode, I’m probably not even fully conscious of the amount of software and services I use every day and how much I rely on them.

The solution, then, is to break out of “user” mode. Once out of “user” mode (and in “audit” or “attack” mode) everything becomes clearer. For example, I recently submitted a vulnerability to a pretty large service provider (I can’t say who yet); was the vulnerability in some back page or a piece of functionality that nobody uses? No, surprisingly, this vulnerability was part of a key piece of functionality that I actually use frequently.

2. Applications or Services That You “Like” or “Believe In”

This is more of an extension than the item above, but it’s worth stressing. If there’s an application or service that you think has potential, go ahead and audit it. New applications and services are often full of low-hanging fruit. By auditing these and reporting the vulnerabilities, you are helping to make these applications and services better.

3. Applications and Services That Make It “Economically Beneficial” to Audit Them

This is a no-brainer. If you’re being offered money to audit an application or service (and the person offering the money has the authority to give you permission to do so), then this is a pretty good place to start. Google, for example, has a Vulnerability Rewards Program.


These are just three of the categories to look for applications or services to audit. It’s certainly not complete as there is a plethora of software out there waiting to be audited, but I hope that this gives you a good head start.

So You Just Received a Vulnerability Report. Now What?

It has come to my attention that there is still at least one group of people that doesn’t know how to responsibly deal with vulnerability reports. No, I’m not talking about the security researchers, the blackhats, or the script kiddies. It’s true that there is already a lot of controversy surrounding proper (or responsible) disclosure etiquette, but that doesn’t concern the group I’m referring to right now. I’m talking about the maintainer of the resource that the vulnerability report is for. That means you, project maintainers!

Before Receiving a Report

One of the biggest difficulties I’ve been having lately is finding contact information for a project maintainer or their security contact. On multi-developer projects, there should be at least one person who is responsible for fielding security-related reports that come in. They should have the ability to put fixes for security vulnerabilities on high priority for the developers.

Your security contact’s email address should be easy to find or guess. Google’s, for example, is easy to find. You could also achieve this effect by using support/bug-report forums, but be sure that any bug or report marked as security-related is automatically hidden from public view. Regardless of whether you use a dedicated email box/alias or support forums for your security reports, the most important thing is to make someone responsible for ensuring that security reports are reacted to quickly and professionally.

After Receiving a Report

As soon as the first human has eyes on the report, it should be assigned to an individual and a confirmation should be sent to the person who provided it. Here is one such confirmation:

Thank you for reporting this to us! We have opened a security investigation to track this issue. The case number for your tracking is MSRC [XXXX]. XXXX is the Security Program Manager assigned to the case and he will be working with you and the team to investigate the issue. She will be following up with you shortly.

This step is super important because many of the people who take the time to report vulnerabilities to the vendor are only just waiting to release the report to the public. You don’t want to still be working on the fix when the news of your project’s security flaw is released.

Once you have someone assigned to the bug, have them send a brief introduction. I never received my introduction from “XXXX” above, so I sent another email inquiring on the status of the bug. Here is the response:

Thank you very much for your message! My name is YYYY and I have taken over this case from XXXX. Earlier this week, the online services team has started testing a fix for the original issue you have reported, and we are currently verifying this, which includes variation testing and a review of the whole page. The added details you have provided to us in the below message will certainly help us in this process, so thanks a lot!

I will contact you as soon as the fix is deployed, and of course, if you have any further information or questions, please don’t hesitate to let us know.

This email is brief, yet contains all the information I would ever want to know. In particular, it includes:

  • An actual person I can continue to provide information to
  • The status of the vulnerability/bug
  • The next step(s) in their review process
  • When I can expect to hear from them next

Just like the MSRC said I would, I heard from them when the fix made it to production. (In case you were wondering, it was 5 calendar days later.) At this point, they made arrangements to acknowledge me on the “Security Researchers Acknowledgment” page. While this is certainly a nice perk, you don’t have to do this.

A valid question at this point is “How long do I have to fix this vulnerability?”

It depends, but as the vendor, that’s up to you to figure out. If you’re receiving a vulnerability report from a non-public source, then consider yourself lucky. The person reporting the vulnerability likely believes in responsible disclosure (inherent in the fact that you got the report first) and will be willing to negotiate on the timeline. Be honest with this person. I once waited two months to report a minor SQL injection vulnerability in a trivial web application because the (sole) project maintainer was on vacation when I emailed him initially.

Summary (tl;dr)

Many of the security researchers who will reach out to you believe in responsible (but full) disclosure. That means that your project’s security flaws will make it to the public sooner or later. To ensure the best experience for your users and the preservation of your project’s reputation, you need to handle your vulnerability reports quickly and properly. That means:

  • Making it easy to find out where to send vulnerability reports to
  • Communicating with the source of the report to confirm receipt of their report
  • Communicating with the source of the report your intentions for their report
    • Who did the vulnerability get assigned to?
    • What is the status of this vulnerability?
    • What are the next steps in the review process for this vulnerability?
    • When they can expect to hear from you next
  • Communicating with the source when you believe the vulnerability is fixed

What it all boils down to is this: react quickly and keep open lines of communication between your project and the security researcher who took the time to report a vulnerability to you. If you do this, you’ll minimize the damage to your user base and your reputation.

Finding Web Vulnerabilities

At the NESIT Hackathon on Saturday, I was talking to a group of people about discovering web vulnerabilities and I was asked “Which scanner or tools do you use?” The absolute shortest answer I can provide is “I don’t use a scanner.” Despite the lack of a vuln scanner in my toolset, I am still able to consistently find vulnerabilities in web applications. Here’s how:

  1. I first begin by finding or setting up an adequate test environment. If the project is freely available (aka open source/free software), I set up a test environment. If the project is not freely available, then I look for a site that uses the platform or application I’m trying to audit. I don’t normally recommend the latter case, but if I’m testing a 3rd-party web service then I don’t have any other choice.
  2. I then get familiar with the application. What does it do? What problem does it try to solve? What does a normal use case look like?
  3. Then I get really, really familiar with the application. In this stage, I’m really interested in the lesser-used functionality (such as error handling) and making things break. How does the application handle errors? How verbose are the error messages? Are any pages particularly slower than the others? Where does the application get most of its data? Request variables? Cookies? A database? A third-party API? I usually do this step with the Firebug plugin for Firefox. I want to know exactly what parameters are being passed to the application, how those variables are being handled, and if (and how) those variables are being spit back out to the user.
  4. My secret weapon is not being afraid to look through the code, if it’s available. If the code is lengthy and I just want to take a cursory glance at it, I grep for “red flags.” Because most vulnerabilities are the result of unescaped, unsanitized user input, these red flags are usually connected to user-provided variables. For example, if I’m auditing PHP scripts for vulnerabilities, I look for code referencing the $_GET, $_POST, $_REQUEST, and $_COOKIE variables. This step does wonders for finding Cross-Site Scripting and SQL injection vulnerabilities.
  5. In very small projects, like WordPress plugins, I’ll read through each file and try to figure out what story the code is telling. This is very much like reading a short story. I want to know what the application is trying to do.
  6. I’ll then read the code more in-depth. This is akin to analyzing poetry in a Literature class. Things like the actual names of variables and the syntax become much more important here. Now I want to know what the application is actually doing. I recently discovered Cross-Site Scripting and SQL injection vulnerabilities in a URL field of an application that was trying to escape its input. The problem, however, was that the application was validating and sanitizing $url_var when the name of the user-input variable was $var_url. The combination of the lack of testing and the lack of error reporting allowed this bug to be introduced into production, which created XSS and SQL injection vulnerabilities. Being able to read the code helps find other issues such as direct-request (authentication/authorization bypass) vulnerabilities.
    I can’t stress enough the value of bug/vulnerability hunting outside the normal execution paths of any given application. If a blog platform seems pretty solid, try exploiting its dynamically-generated RSS feed. If a 3rd-party web service looks perfect, try exploiting its support forums or its help site in a different language or character set. Think outside of the box. That phrase is cliché, but it’s cliché for a reason.
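The “red flag” grep pass from step 4 can be mimicked in a few lines of Python (the pattern below covers the PHP superglobals mentioned above; extend it for other languages):

```python
import os
import re

# PHP superglobals that carry user input -- the "red flags"
RED_FLAGS = re.compile(r'\$_(GET|POST|REQUEST|COOKIE)\b')

def find_red_flags(root, extension='.php'):
    """Yield (path, line_number, line) wherever user input is referenced."""
    for dirpath, _, filenames in os.walk(root):
        for filename in filenames:
            if not filename.endswith(extension):
                continue
            path = os.path.join(dirpath, filename)
            with open(path, errors='replace') as handle:
                for lineno, line in enumerate(handle, 1):
                    if RED_FLAGS.search(line):
                        yield path, lineno, line.strip()
```

Every hit is a place where user input enters the application, which is exactly where you want to start asking how (or whether) that input gets escaped.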