We were contacted by a financial services startup that had built a proof-of-concept Shiny application to sign up prospective customers. The app worked like this:
- Let the user provide access to their existing 401(k) information.
- Given the risk profile associated with their existing portfolio, run a backtest comparing that portfolio’s performance against the startup’s own recommendations. Results were presented as line graphs and summary stats (best year, worst year, projected balance growth, etc.).
- Allow the user to sign up for an account and use the startup for their 401(k) allocation moving forward.
Pretty good for a proof-of-concept, right?
They wanted help polishing the Shiny application into a production-ready, customer-facing tool, so our first step was a phone conversation to list the required changes and additional features.
Finding the Optimal Solution
That’s when it became clear that they didn’t need Shiny to do everything – calculating backtest results and creating plots really benefited from R, but other processes, like signing up users, were handled no better in R than in any other standard framework for web apps.
So we made a recommendation: build a traditional sign-up/sign-in website, and separate the R code into an API using OpenCPU.
The conversion process could not have been simpler:
- Split the code into single-purpose, well-defined functions. With a well-structured Shiny application this can be as simple as turning each observe block into a named function.
- OpenCPU requires packaged code, so if the Shiny app wasn’t already part of a package, it’s time to create a package skeleton. The function devtools::create simplifies that process.
- Install the opencpu package, which provides a single-user server for local testing.
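As a sketch of that first refactoring step (the function name, the inputs, and the observer shown here are invented for illustration, not the client's code), a calculation that once lived inside an observe block becomes a plain, exportable function:

```r
# Before: backtest logic buried in a Shiny observer, roughly
#   observe({
#     res <- <backtest code using input$returns>
#     output$summary <- renderPrint(res)
#   })

# After: a single-purpose function that a package can export
# and OpenCPU can serve. 'returns' is a numeric vector of
# yearly returns, e.g. c(0.08, -0.02, 0.11).
backtest_summary <- function(returns) {
  growth <- cumprod(1 + returns)  # cumulative balance multiple per year
  list(
    best_year        = max(returns),
    worst_year       = min(returns),
    balance_multiple = growth[length(growth)]
  )
}
```

A function like this is trivially unit-testable on its own, which is a side benefit of pulling logic out of reactive contexts.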
From that point on, it was a routine development cycle of editing the R code, then rebuilding the package, and testing with cURL to send HTTP requests.
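To make that cycle concrete, here is the general shape of an OpenCPU test call. The package name (backtester), function name (backtest_summary), and argument are placeholders, and the port shown is the one the local single-user server typically reports on startup – check the address your own session prints:

```
# POST a function call to the local OpenCPU server; appending /json
# to the endpoint asks OpenCPU to return the result as JSON
curl http://localhost:5656/ocpu/library/backtester/R/backtest_summary/json \
  -d 'returns=[0.08,-0.02,0.11]'
```

Each edit-rebuild-curl iteration exercises exactly the HTTP interface the production front end will eventually use.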
This structure isn’t always ideal, but in this case we think it served the client very well – they didn’t need to rely on R (or Shiny) for their entire website, which could have had major cost implications once their sign-up rate took off.
Instead, they were able to leverage the experience of a website design team (who didn’t need to know anything about R) to build a great-looking website to attract clients. At the same time, they could rely on our extensive experience with R to convert the backtesting code into an HTTP API that could be called from front-end or back-end code.
We think this was a best-of-both-worlds scenario for our client.
They got what they needed to demonstrate the value of their services to prospective clients, while retaining maximum flexibility moving forward in terms of separating service dependencies, dividing labor and expertise, and staying free of looming software license costs.
Putting the code online was equally simple:
- Start a bare Ubuntu (or other Linux flavor, if preferred) server. We use AWS, but any cloud provider where you can get root shell access will work.
- Follow the four (only four!) steps provided by the OpenCPU website: https://www.opencpu.org/download.html.
- At that point you should be able to access http://yourhost/ocpu/test and see a page like this:
The last step is to install the custom R package(s), along with any dependencies (devtools::install_github does a great job with this).
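On the server, that installation step amounts to a couple of R calls (the repository name below is a placeholder – substitute your own GitHub organization and package):

```r
# Run in R on the server, as a user whose library path
# the OpenCPU server can read
install.packages("devtools")
devtools::install_github("example-org/backtester")  # hypothetical repo
```

devtools::install_github resolves and installs the package's declared dependencies along the way, which is what makes it convenient here.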
/etc/opencpu/opencpu.conf contains some configuration options worth noting:
- It’s possible to change timeout defaults – for example, the default timeout for a POST request is 90 seconds, but function calls may need to run longer than that.
- Some use cases require large data sets built into packages: it’s possible to pre-load particular packages to reduce the start-up time required for each call.
- Many use cases will want to disallow CORS (cross-origin resource sharing – that is, allowing calls from othersite.com to use your API at yoursite.com). This is a simple true/false setting in the configuration file.
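A sketch of what the first two of those settings can look like in the configuration file – the values are illustrative (a five-minute POST timeout and preloading a hypothetical backtester package), and the exact key names should be checked against the OpenCPU server manual for your installed version:

```
{
  "timelimit.post": 300,
  "preload": ["backtester"]
}
```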
It’s also worth noting that by default OpenCPU installs with Apache as its web server. This works fine and uses port 80 by default, but if Nginx is preferred, the default Apache server block for OpenCPU can be commented out, and the virtual host listening on port 8004 will still accept proxied requests.
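A minimal Nginx server block for that proxy arrangement might look like the following sketch – the host name is a placeholder, and your deployment may need additional proxy headers or timeouts:

```
server {
    listen 80;
    server_name yourhost.example.com;

    # Forward API traffic to the OpenCPU vhost on port 8004
    location /ocpu/ {
        proxy_pass http://127.0.0.1:8004/ocpu/;
        proxy_set_header Host $host;
    }
}
```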
Lastly, we recommend using TLS/SSL in all cases. In any standard setup, the free, automated procedure with Let’s Encrypt takes less than five minutes to complete.