From time to time, people ask me questions about my work in software development.  I usually find myself unprepared to answer.  It's not that my work is particularly hard to describe; it's simply that I've never taken the time to think carefully about how I would describe it.  As a result, I tend to focus on the output of my work, rather than the process I follow to get to those results.

Fortunately, over the last two days I worked on a small side project, and I think it's a good example of what I do at work.  My intention is to give a clear picture of the process I go through, highlighting the key parts.  I hope it inspires some people, and confuses none.

Everything starts with a problem.  The goal is to solve the problem.  In my case, for at least a few months now, if not years, I've been considering how I would add graphs and charts to my website.  

The next step is to analyze the problem, identify any potential roadblocks, resolve those roadblocks, and outline a viable way to implement the solution.  In my case: a charting library typically uses JavaScript to render charts from data, often JSON fetched from a database.  To add a chart to a webpage, you need to import the JavaScript charting library, obtain the data you wish to render, and call the charting library to draw the chart into a specific area of the page.  While these requirements are straightforward for a static webpage, they introduce some subtle challenges in the context of a blog or wiki.  Looking at each requirement:
  • Importing the JavaScript charting library poses no challenge.
  • Getting the data is a challenge.  There are a few options, but they are all imperfect.
    • To avoid making the blog post depend on the database, the data could be hardcoded into the post.  However, if the data were ever updated in the database, the post would go stale.  This approach also scales poorly for large datasets.
    • The data could be fetched dynamically.  Unfortunately, this would embed JavaScript code into the blog post, making the database-querying JavaScript a dependency of the post, which is a bad idea, as explained below.
  • Embedding JavaScript code into a blog post (content stored in a database) would tie the post to the charting library that the code depends on.  Changes to the application architecture or the charting library could require updates to the post; otherwise, the post could break the website.  The post would also require JavaScript to display properly.  In short, embedding JavaScript code into blog posts is a bad idea.
Thus, directly using a JavaScript charting library in a blog post or wiki is not the answer.  However, if the rendered chart could be converted into a format that is easy to embed, that could work.
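To make the requirements above concrete, here is a sketch of the pattern, using names from the public Google Charts API.  The actual draw step needs a browser, so it appears only as comments; the sample data is made up for illustration.

```javascript
// Requirement 2, "get the data": the row-array shape that
// google.visualization.arrayToDataTable() accepts (header row first).
const rows = [
  ['Month', 'Visitors'],   // column labels
  ['Jan', 120],            // sample values, made up for illustration
  ['Feb', 150],
  ['Mar', 90],
];

// Requirements 1 and 3 (browser only): import the library via a
// <script> tag, then render into a specific element on the page:
//
//   google.charts.load('current', { packages: ['corechart'] });
//   google.charts.setOnLoadCallback(() => {
//     const data = google.visualization.arrayToDataTable(rows);
//     const chart = new google.visualization.LineChart(
//       document.getElementById('chart_div'));
//     chart.draw(data, { title: 'Monthly visitors' });
//   });

console.log(rows.length);  // header row plus three data rows → 4
```

It is exactly this browser-side block, with its library and element dependencies, that would be a liability inside a blog post.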

After doing some research, I discovered that some charting libraries, such as Google Charts, support exporting charts to PNG format.  In addition, Google Charts renders its charts as SVG, which is itself easy to embed.  That gave me two potential solutions to the roadblocks I identified earlier.
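As a sketch of why SVG is "easy to embed": once the library has rendered the chart as inline SVG, the markup can be serialized and wrapped in a data URI, which pastes straight into an `<img>` tag with no JavaScript on the page.  (The PNG path instead goes through the chart's `getImageURI()` method, which needs a browser.)  The helper below is a minimal Node-runnable illustration of the wrapping step, not the tool's actual code.

```javascript
// Wrap serialized SVG markup in a data URI so it can be embedded
// directly as <img src="..."> without any JavaScript on the page.
function svgToDataUri(svgMarkup) {
  const base64 = Buffer.from(svgMarkup, 'utf8').toString('base64');
  return 'data:image/svg+xml;base64,' + base64;
}

// A trivial stand-in for a rendered chart (real charts are far larger).
const svg = '<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10">' +
            '<rect width="10" height="10" fill="blue"/></svg>';

console.log(svgToDataUri(svg).slice(0, 26));  // "data:image/svg+xml;base64,"
```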

The plan became clear: build a tool to convert a Google Chart into PNG or SVG, and make the result easy to copy and paste.

The next step is to build the tool.  Because Google Charts supports many kinds of charts and graphs, I initially wanted to just let the user specify a URL.  The tool would navigate to the URL, render the page, take a screenshot, save the image, and return it to the user.  Unfortunately, this approach carried several risks:
  • The URL would need to be publicly accessible to the tool.  And since images would be saved on the server, the server could end up retaining sensitive or confidential information.
  • The webpage generating the chart would need to follow a specific structure to work with the tool.
  • The webpage could contain malicious JavaScript, which could potentially compromise the server.
After weighing these risks, I went back to the drawing board to look for an alternative plan.  I reminded myself that the primary goal was to solve the problem for me, not for the entire world.  So I could build a tool that covers the common chart types and features I actually needed.  To keep the data private, the user could enter the data into an input field, and the chart would be generated on the fly, without a round trip to the server.  I settled on a plan that would solve my problem and still be useful to many other people: build a tool that generates a Google Line, Area, Bar, Column, or Pie chart from CSV-formatted data, converts the chart into PNG or SVG, and makes it easy to copy and paste.
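The "CSV in, chart out" half of that plan can be sketched as a small parser that turns the pasted text into the row array a charting call expects.  This is a simplified illustration (it assumes no quoted fields or embedded commas), not the tool's actual code.

```javascript
// Turn CSV text into a row array: the first row stays as string labels,
// and every later cell that looks numeric is coerced to a number.
// Simplification: assumes no quoted fields or embedded commas.
function csvToRows(text) {
  const lines = text.trim().split(/\r?\n/);
  return lines.map((line, i) =>
    line.split(',').map((cell) => {
      const value = cell.trim();
      if (i === 0) return value;                  // header row: keep labels
      const n = Number(value);
      return value !== '' && !Number.isNaN(n) ? n : value;
    })
  );
}

console.log(csvToRows('Month,Sales\nJan,10\nFeb,20'));
// → [ ['Month', 'Sales'], ['Jan', 10], ['Feb', 20] ]
```

In the browser, the resulting rows would then be handed to the charting library to draw, entirely on the client, so the data never touches the server.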

Next comes the step of implementing the plan.  This was heads-down coding, with the Google Charts API documentation as my reference.  This part only took a few hours, about half a day.

After writing the code comes testing: fixing bugs and issues, and making sure the tool is usable, even pleasant to use.  This step consumed more time and effort than the coding itself, because this is when edge cases come to light and weird behavior needs to be debugged.  In my case, I had to figure out why Material Design Lite (MDL) was clashing with Nuxt.

Finally, having tested the tool to my satisfaction, it came time to release it and tell people about it, which is what this post is all about.  And that, in short, is what I do at work.

You can explore the tool here.

This is an example of the output:

Written on July 14, 2018
Updated on August 6, 2022. © Copyright 2023 David Chang. All Rights Reserved.