Interesting post by Tantek Çelik about the futility of building full-JS websites, which tend to look something like this:
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title></title>
  </head>
  <body>
    <script src="website.js"></script>
  </body>
</html>
I'm tired of this antipattern myself, but there's a glimmer of hope. I explain how it doesn't always have to be that obnoxious for the end user: read my Hacker News thread.
Not everyone surfs the web with Lynx. Whilst I understand the need for websites to stand the test of time and be "curlable", there are still many 'appy' websites out there that fall back to plain HTML when we need it.

I'm a big fan of archival services like Pinboard, which I have been running for 3-4 years now under an archival account. It holds terabytes of raw HTML data that I can peruse at any time and search in full text. The bulk of those pages are very JS-dense, yet somehow, through some wizardry on the sites' backends, they have managed to preserve some text for me to read.

GoogleBot struggled with this not so long ago, but it can now crawl URIs with a hashbang (#!) fragment in them as if the page were a normal HTML page. I suspect GoogleBot is a stripped-down Chrome that renders the page and scrapes it; in fact, GoogleBot has been proven to execute JS. SEO and search aside, there is (hopefully) some server black magic that detects browsers like Lynx and serves us some 'neckbeard text'. Webapps which are not doing that are probably not worth your time anyway.
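For the curious, here is a minimal sketch of how that hashbang crawling scheme fits together (the URLs and page here are my own illustration, not any particular site's): a page can opt in with a "fragment" meta tag, and the crawler rewrites #! URLs into an _escaped_fragment_ request that the server is expected to answer with a pre-rendered HTML snapshot.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>JS-heavy page</title>
    <!-- Opt-in for pages without #! URLs: tells the crawler an HTML snapshot exists. -->
    <meta name="fragment" content="!">
  </head>
  <body>
    <script src="website.js"></script>
    <!-- A URL like http://example.com/#!/photos is fetched by the crawler as
         http://example.com/?_escaped_fragment_=/photos, and the server responds
         with plain, pre-rendered HTML for that state. -->
  </body>
</html>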
Also Noteworthy
There are great proposals by the W3C to get webapps working without the need for JS. A lot of the behavior you see now in webapps could be done with simple HTML tags.
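As a hypothetical example of what that looks like (the /subscribe endpoint is made up; the elements and attributes come from the HTML5 work): a collapsible section and native form validation, all in markup, with no script anywhere.

<details>
  <summary>Advanced options</summary>
  <p>Expands and collapses with no JS at all.</p>
</details>
<form action="/subscribe" method="post">
  <!-- The browser itself refuses to submit until this is a valid email address. -->
  <input type="email" name="email" required placeholder="you@example.com">
  <button>Subscribe</button>
</form>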