In February, I set out to solve a problem with client-side data visualization: How can I get data from the web into a format that my graph can use?
The usual answer is to scrape the relevant fields, parse them somehow, store them in a database, and then set up some interface for the graph to consume. For many projects this makes sense, but it introduces a lot of moving parts. I started to wonder whether a database was even necessary for small, one-off graphs.
I spent a few days mashing up Node, Request, and JSONSelect into a kind of proxy-plus-filter. This meant I didn't have to worry about cross-origin issues, and I could strip out unnecessary fields. Sieve was born, and it was actually useful!
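The core of that first version fits in a few lines. Here is a rough sketch, assuming the request and JSONSelect npm modules; the url and selector query parameters are illustrative names I'm using here, not Sieve's actual API:

    // A minimal proxy-plus-filter: fetch ?url=..., keep only the fields
    // matched by ?selector=..., and allow cross-origin access.
    var http = require('http');
    var urlLib = require('url');
    var request = require('request');
    var jsonselect = require('JSONSelect');

    http.createServer(function (req, res) {
      var query = urlLib.parse(req.url, true).query;
      request(query.url, function (err, response, body) {
        if (err) {
          res.writeHead(502);
          return res.end(JSON.stringify({ error: err.message }));
        }
        var matches;
        try {
          // Apply a CSS-like selector to strip out unneeded fields.
          matches = jsonselect.match(query.selector, JSON.parse(body));
        } catch (e) {
          res.writeHead(400);
          return res.end(JSON.stringify({ error: e.message }));
        }
        // Because the browser only ever talks to this proxy, the
        // upstream host's CORS policy never comes into play.
        res.writeHead(200, {
          'Content-Type': 'application/json',
          'Access-Control-Allow-Origin': '*'
        });
        res.end(JSON.stringify(matches));
      });
    }).listen(3000);

A graph on any page can then ask for exactly the slice it needs, e.g. /?url=http://example.com/data.json&selector=.price, and get back a small JSON array instead of the whole document.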
Over the next two months, I added the ability to combine multiple requests into one, stream results back via WebSocket, cache, parse, and generally expand the list of likely use cases.
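To give a feel for the request-combining idea, here's a sketch (again with made-up names and a made-up payload shape, not Sieve's actual format): take a map of names to URLs, fetch them all in parallel, and reply once with a single merged object.

    // Batching sketch: fetch several URLs in parallel and merge the
    // parsed bodies into one object, keyed by name.
    var request = require('request');

    function fetchAll(urls, done) {
      var results = {};
      var pending = Object.keys(urls).length;

      Object.keys(urls).forEach(function (name) {
        request(urls[name], function (err, response, body) {
          try {
            results[name] = err ? { error: err.message } : JSON.parse(body);
          } catch (e) {
            results[name] = { error: e.message };
          }
          // Reply once, after the last response arrives.
          if (--pending === 0) done(results);
        });
      });
    }

    fetchAll({
      weather: 'http://example.com/weather.json',
      stocks: 'http://example.com/stocks.json'
    }, function (combined) {
      console.log(JSON.stringify(combined)); // one round trip instead of two
    });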