If everything seems under control, you’re not going fast enough.
Mario Andretti
I have been using Shiny and Dash for a couple of years for all kinds of small internal projects.
Some grew into fairly large and complex analytical apps which my colleagues and I use on a day-to-day basis in our investment management and business development.
However, deployment remained a bit of a bottleneck. To get around the limitations of single-threaded Shiny apps, we eventually started to create Docker images.
This is a perfectly fine solution for apps used by a small team and has served us well. Nevertheless, as the number of apps grew, managing the containers and their respective authentication became a mess.
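For illustration, a stripped-down version of such an image might look like the sketch below. This is not our actual production Dockerfile; the package list and paths are placeholders, though the rocker/shiny base image is real and ships Shiny Server listening on port 3838:

```dockerfile
# Sketch of a containerized Shiny app (not our production image).
FROM rocker/shiny:latest

# Install whatever packages the app needs (placeholders).
RUN R -e "install.packages(c('dplyr', 'ggplot2'))"

# Shiny Server serves apps from this directory by default.
COPY app/ /srv/shiny-server/myapp/

EXPOSE 3838
# The base image's default command already starts shiny-server.
```

Running each app then means mapping one host port per container, which is precisely the bookkeeping that got out of hand.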
We really appreciate what the team at Open Analytics has achieved with ShinyProxy, a "novel, open source platform to deploy Shiny apps for the enterprise or larger organizations" (feel free to have a look) ... and not only Shiny apps but also applications written in Dash or Flask, as well as R Markdown scripts and Jupyter notebooks.
On top of that, ShinyProxy includes an authentication service that now allows us to manage user access to the different apps precisely. Still, we only implemented this fantastic solution when a small marketing project gave us another push.
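To give a flavour of how this looks in practice, here is a heavily abridged sketch of a ShinyProxy application.yml. The keys follow ShinyProxy's documented configuration format, while the users, groups, app ids and image names are made up:

```yaml
# Sketch of a ShinyProxy application.yml (all names are placeholders).
proxy:
  title: Our Analytics Portal
  port: 8080
  authentication: simple   # LDAP, Keycloak, OpenID etc. are also supported
  users:
    - name: analyst1
      password: change-me
      groups: research
    - name: admin1
      password: change-me
      groups: admins
  docker:
    url: http://localhost:2375
  specs:
    - id: revisions-app
      display-name: Data Revisions Explorer
      container-image: ourorg/revisions-app
      access-groups: [research, admins]   # only these groups see the app
```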
I frequently publish research papers covering various finance topics for Amadeus Capital, a multi-family office and wealth management firm and a sister company of Amadeus Quantamental.
The most recent article (Beware of the lies of history) covered the topic of data revision in economics. It is a well-known problem in macroeconomic research and model backtesting that economic statistics are released before all the underlying data has been collected and are revised later, once the respective agencies have gathered further information.
Most data providers, such as Bloomberg, keep only the most recently published (and by definition most accurate) time series, which can easily create an inflated sense of the timeliness and accuracy of economic indicators. The problem is easy to overlook or ignore.
Fortunately, the St. Louis Fed runs a nice database called ALFRED that keeps archival economic data, and I used it to compile statistics showing the magnitude of revisions and their impact on the assessment of lead/lag relationships and correlations between economic data and financial markets.
This data is accessible in R and Python through a handy API.
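As a hypothetical illustration, pulling every published vintage of a series takes only a few lines of Python. The realtime parameters are documented FRED/ALFRED features; the series choice and API key below are placeholders:

```python
# Sketch: fetching archival ("vintage") observations from ALFRED.
# A wide realtime window returns every vintage of each observation,
# not just the latest revision. Requires a free St. Louis Fed API key.
import requests

URL = "https://api.stlouisfed.org/fred/series/observations"
params = {
    "series_id": "GDPC1",            # placeholder: real US GDP, heavily revised
    "api_key": "YOUR_API_KEY",       # placeholder
    "file_type": "json",
    "realtime_start": "2000-01-01",  # earliest vintage to include
    "realtime_end": "9999-12-31",    # documented value for "all later vintages"
}
obs = requests.get(URL, params=params, timeout=30).json()["observations"]

# Revised figures show up as multiple rows per observation date, each
# tagged with the period during which that value was the published one.
for o in obs[:5]:
    print(o["date"], o["value"], o["realtime_start"], o["realtime_end"])
```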
Setting up the Shiny app was simple but ...
In the past I had seen academic projects that were obviously built on Shiny, and I didn't like their volatile performance, a consequence of Shiny being single-threaded: one R process serves all visitors, so a long computation for one user blocks everyone else. For a limited number of users this can still be okay as long as the app doesn't include any long-running tasks, but for an application like mine that loads large datasets it doesn't work even for two simultaneous visitors.
Manually managed Docker containers with one port per user, as we run them for our internal applications, were obviously not a solution either.
Eventually I turned to shinyapps.io, the public service run by RStudio, and deployed the app there. However, due to the size of my datasets I immediately hit the size limit of the free tier and had to upgrade to a paid subscription. Even then, I was not pleased with the performance and scalability of the app.
So we installed ShinyProxy on our own server and eventually managed to configure it in a way that adds new public users automatically.
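Our actual configuration for public users is more involved, but ShinyProxy does support open access out of the box; a minimal public setup, with a made-up app id and image name, looks roughly like this:

```yaml
# Sketch of a public ShinyProxy setup (app id and image are placeholders).
proxy:
  port: 8080
  authentication: none   # anyone can open the listed apps
  docker:
    url: http://localhost:2375
  specs:
    - id: data-revisions
      display-name: Beware of the Lies of History
      container-image: ourorg/data-revisions
```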
This turned out to be a non-trivial task, not least because community support is still pretty limited in many areas. We intend to add further public applications over time, whenever we have an idea we consider worth sharing. In fact, a second application dealing with portfolio optimization is on its way and already up and running in our test environment.
Shiny, Dash and the like are powerful tools, but the gap between data scientists, IT and the (potential) users of the applications can be quite deep. Unlocking their potential can be challenging, especially for smaller organizations and independent researchers.
Please feel free to reach out to us.
Many thanks to Leigh Adnett, our server genius, and Andreas Unterberger, our Docker pro, for making this possible. And of course a special thanks to the team from madewithapixel for the beautiful landing page and this blog function.