June 28, 2020


The JavaScript Ecosystem

Screenshot of Netscape Navigator in 1995 (Image credit: Wikipedia)

JavaScript started out as a simple extension for the browser but has become so much more. In part, this is because it builds on rich concepts going back to Lisp. Along the way, it has challenged the givens of programming and given us a high-performance, flexible language along with rich libraries and tools. We’re just beginning to discover the possibilities.

Back to the Future

I’m excited about the JavaScript ecosystem. The success of JavaScript is surprising. After all, the language was originally designed in 10 days and had many warts in its initial implementation. But it was built on deep computer science principles going back to one of the seminal ideas in software: the Lisp language. Lisp represented a major departure from the idea of computers as very high-performance calculators. It manipulated symbols rather than numbers and took the idea of stored-program computing to the next step by allowing the program to operate upon itself.

Lisp was eclipsed by mainstream programming languages in the 1960s because the machines of the day were very expensive and had limited resources. Thus, the focus was on calculation and efficiency. The limited capacity of the machines meant that programs had to be heavily preprocessed to fit into the computers. Thus, they were precompiled into the machines' low-level language, and only those portions of the program that were necessary were loaded into the computer.

As part of this, the idea of strong typing was considered necessary so that the compiler would generate the appropriate instruction for integer or floating-point addition. I was reminded of this in 1970 when I was looking for a topic for my bachelor’s thesis and I was discouraged from writing a language based on dynamic typing because it couldn’t be efficient for complex structures since it would have to revisit the type information for each element and sub-element again and again.

Such restrictions were relaxed for scripting languages, which were meant to automate simple keyboard tasks such as running a series of commands.

JavaScript, as its name implies, was initially considered a scripting language for adding simple capabilities to the browser, with Java being considered the heavy-duty language for “real” extensions. The languages are essentially unrelated, with the term “JavaScript” being chosen for marketing reasons.

And something surprising happened. JavaScript became a high-performance language, bringing the basic ideas of Lisp back to the future. The ideas of 1958’s Lisp were remarkably ahead of their time.

One nice thing about the heritage of Lisp is its orthogonality: the rich set of mechanisms is built on a simple base. While there is an emphasis on performance in high-end browsers, it also allows for compact implementations where appropriate.

How and Why JavaScript

In 2014, I wrote about the growing importance of HTML5. Writing applets in Java was problematic because the apps had full access to the host computing environment. Confining JavaScript to the browser meant the code was portable and sandboxed.

That also meant that the code would be sloppy, at least at first, because the damage was limited to an errant web page. The applications ran outside the security perimeter since the servers couldn’t trust any code on the client side (AKA, in the browser).

Google has a huge financial interest in browser performance since a few tenths of a second in browser response time could translate into billions of dollars in revenue. Hence the great incentive to improve the performance of JavaScript, and it worked far better than I expected. I had been using JavaScript for simple projects when my friend Dan Bricklin commented that JavaScript had become a high-performance language, with Google’s V8 engine playing a key role.

One of the techniques used is JIT, or Just-in-Time, compilation. As computer performance increased, the percentage of time spent translating to machine code relative to the total execution time became very small, thus eliminating the need for precompilation. This meant that the JIT compiler could take advantage of on-the-ground knowledge, such as exactly which processor was being used and what special features it might have.

Part of this was using the knowledge of the types of the parameters. Thus, if a piece of code is being called with an integer parameter, that method could be compiled for just that case. The compiled code would do a quick check on the parameters to make sure that, at runtime, the parameters were indeed integers. For a simple operation like addition, that check might be high overhead, but such checking gets pushed back into the containing method, and a whole set of operations can be made safe and efficient because one type check can cover a variety of methods.
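As a rough sketch (the engine’s actual heuristics are far more involved), the specialization might be pictured like this:

```ts
// Hypothetical illustration of JIT type specialization.
function add(a: any, b: any) {
  return a + b;
}

// Called repeatedly with integers, this "hot" function can be compiled
// into machine code specialized for integer addition, protected by a
// cheap runtime type guard:
for (let i = 0; i < 100_000; i++) {
  add(i, i);
}

// A later call with different types fails the guard, and the engine
// falls back ("deoptimizes") to generic code:
add("12", "34"); // "1234"
```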

All this while preserving the ability to handle edge cases on the fly rather than having to check for them every time, thus allowing more streamlining than precompiled code.

Another technique is memoization, which can short-circuit an entire computation by recognizing that the parameters are the same as in a previous invocation and that the code has no side effects.
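A minimal sketch of the idea, assuming a side-effect-free function and JSON-serializable arguments:

```ts
// Cache results keyed by the arguments; valid only for pure functions.
function memoize<T>(fn: (...args: any[]) => T): (...args: any[]) => T {
  const cache = new Map<string, T>();
  return (...args: any[]) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn(...args));
    }
    return cache.get(key)!;
  };
}

const slowSquare = (n: number) => n * n; // imagine something expensive
const fastSquare = memoize(slowSquare);
fastSquare(12); // computed
fastSquare(12); // short-circuited: returned from the cache
```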

These techniques work well when the code is regular, but that is a very reasonable assumption akin to the locality assumption that allows both modern operating systems and machine learning to work so well. Oddball cases can cause performance problems but aren’t fatal.

Declaring types in programming languages isn’t merely about performance; it is also about safety. Having strong typing means that bugs are discovered at compilation time rather than lurking until the program runs. It turns out that using types for safety and using types for runtime performance can be decoupled. I see Microsoft’s TypeScript as a way to have a conversation with the IDE (Integrated Development Environment) rather than as a necessity for compilation.

The IDE can help me identify problems and, just as important, refactor code by following the implications of my declarations. If I try to add 12 to the string “12”, raw JavaScript would simply convert the number to a string and give me the result “1212,” which may or may not be what I want. If I use TypeScript, the IDE would flag it, thus forcing me to be explicit about what I want. Or not: I could just tell it to ignore the type information by using the “any” type. I get a chance to decide what is appropriate for the code I’m writing.
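In code, the difference looks like this:

```ts
const result = 12 + "12"; // plain JavaScript coerces: "1212"

let n: number = 12;
// n = n + "12"; // TypeScript flags this: string is not assignable to number

// Or not: "any" tells the checker to stand down.
let loose: any = 12;
loose = loose + "12"; // allowed: "1212"
```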

TypeScript is an important part of scaling code. I rely on its assistance to manage my own code since I need a housekeeper. It is even more important for large teams of programmers. This is why Google has become a key supporter of TypeScript and uses it for projects such as Angular.

Visual Studio Code (VSC) is part of this ecology: an IDE well suited for TypeScript and written in TypeScript! It is built on Electron, which is, in turn, built on Google’s Chromium engine that is behind both Chrome and (now) Microsoft’s Edge browsers.

Much of this, including the TypeScript compiler and VSC, is open source, accepting outside contributors, with GitHub (now owned by Microsoft) being the shared repository.

Note that both VSC and GitHub support a wide variety of languages, but I see JavaScript (any mention of JavaScript includes TypeScript) as having a special place while coexisting with other languages.

JavaScript doesn’t have to be everything for everyone. It doesn’t have to support complex multithreaded code and instead can focus on a single-threaded model that eliminates much of the complexity of interlocking code while still supporting asynchronous operations.

An operation such as fetching data from a server is done asynchronously. Traditionally this has been handled with callbacks that are invoked when the operation is complete. That pattern can become complicated very quickly. The async/await keywords greatly simplify the code. For those used to more traditional languages, you need to wrap your head around the idea that the code is really a callback behind the scenes, but you can read it as linear code; you just need to choose whether you want to read the A path or the B path.
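As a sketch (the URL is illustrative), the same operation reads very differently in the two styles:

```ts
// Callback/promise style: each continuation is explicit.
fetch("https://example.com/api/user")
  .then((response) => response.json())
  .then((user) => console.log(user.name))
  .catch((err) => console.error(err));

// async/await style: the same callbacks behind the scenes,
// but readable as linear code.
async function showUser() {
  const response = await fetch("https://example.com/api/user");
  const user = await response.json();
  console.log(user.name);
}
```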

In a sense, it’s like my view of removing the GOTO from code. It allows you to use your understanding of the static code as a way to understand the runtime or dynamic behavior.

JavaScript does offer concepts such as worker threads that allow for spinning off truly parallel operations but with strong constraints on the interactions with the main code, thus preserving the simplicity.
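A minimal sketch using Node’s worker_threads module shows the constraint: the worker runs in parallel, but it interacts with the main code only by passing messages, with no shared mutable state by default.

```ts
import { Worker, isMainThread, parentPort } from "worker_threads";

if (isMainThread) {
  // Spin off a parallel worker running this same file.
  const worker = new Worker(__filename);
  worker.on("message", (msg) => console.log("from worker:", msg));
} else {
  parentPort?.postMessage("hello from a parallel thread");
}
```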

The lack of static typing means that JavaScript classes are very different from classes in languages such as C++ or Java. Though the class keyword has been introduced into the language, JavaScript classes are prototypes that can be copied, shared, and manipulated at runtime. Coupled with the creative use of lambda expressions, one can implement various programming styles, such as applicative programming. However, it is important to recognize how JavaScript prototypes differ from static classes.
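A small sketch of what runtime prototype manipulation looks like, something a static class cannot do:

```ts
const greeter = {
  name: "world",
  greet() {
    return `Hello, ${this.name}`;
  },
};

// A new object that shares greeter as its prototype, reshaped at runtime:
const loud = Object.create(greeter);
loud.name = "JavaScript";
loud.greet = function () {
  return greeter.greet.call(this).toUpperCase();
};

console.log(loud.greet()); // "HELLO, JAVASCRIPT"
```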

Together these innovations have given us a high-performance production language that allows the developer to choose the style of programming and the flexibility appropriate for a project.

JavaScript is still a work in progress. Future developments are needed to improve the safety of encapsulating code and increase the ability to operate on the code itself. There is a danger of adding too much, and maybe we’ll need a reboot but, for now, it’s my language of choice.

Oh, and there is an optional escape hatch: WebAssembly. It is the equivalent of low-level machine code, but that’s a topic in its own right.

Environments

It is useful to distinguish between JavaScript engines and platforms. “Engine” is a term of art that is used because neither interpreter nor JIT (Just-in-Time) compilation captures the essence of how JavaScript is implemented.

We also have platforms or environments which wrap JavaScript and support applications.

NodeJS

The language itself is only part of the story. Without the need for precompilation, the code itself is portable. Hence the ability to import code into the browser from disparate sources on the fly and composite it. And we’re no longer confined to the browser.

While Ryan Dahl’s NodeJS was not the first server-side JavaScript environment, it has become the standard. It takes full advantage of V8 (though it can use other engines) to provide a high-performance server environment with all the benefits of JavaScript. It also played a role in disentangling JavaScript from the browser DOM (Document Object Model).

Deprecated functions such as bold() (which returns a string wrapped in “<b></b>” tags) are part of that legacy. Similarly, the implicit browser namespace has evolved into explicit namespaces for the various environments.

Part of what makes NodeJS so powerful is its full support of the asynchronous capabilities of JavaScript. It also shares the browser’s powerful debugging tools, making them available via VSC.

For very high-performance servers, it’s easy to spin up additional instances of the server to run in parallel. As I wrote, my own site is written in JavaScript (OK, TypeScript) and runs very well. I do spin up additional instances as beta sites because it takes only a few minutes to start up a DigitalOcean droplet for my purpose.

For more complicated sites, there is a lot of tooling, but that is another topic.

NPM is the Node Package Manager that makes it easy to tap into a rich library of open-source (and proprietary) packages, typically stored on GitHub. Modules can be very simple or very sophisticated, as in React for implementing sophisticated websites.
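To make the point about simplicity concrete, an entire publishable module can be a few lines (the package name here is hypothetical):

```ts
// greet.ts — the whole module.
export function greet(name: string): string {
  return `Hello, ${name}`;
}

// A consumer, after installing, imports it by name just as it would
// something as large as React:
//   import { greet } from "tiny-greet";
//   import React from "react";
```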

Deno

Ryan Dahl looked at what he had accomplished with Node and, as any great developer would, looked over his work and saw what a mess it was. It is a complicated piece of C++ code that requires Python to put it all together. The module capability had gotten bloated.

His new project, Deno, is very promising. It supports TypeScript natively and replaces the module machinery with direct use of libraries, imported by URL. While it’s hard to displace Node, I’m excited by Deno and see it complementing Node as a platform of choice, but it may take a while.
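A hedged sketch of the style (the std library API shown is circa 2020; paths and signatures vary by version):

```ts
// Libraries are fetched directly by URL at runtime; no package manager.
import { serve } from "https://deno.land/std@0.60.0/http/server.ts";

const server = serve({ port: 8000 });
console.log("listening on http://localhost:8000/");
for await (const req of server) {
  req.respond({ body: "Hello from Deno\n" });
}
```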

Deno is built using Rust, which was developed by Mozilla as an alternative to C++. Rust has also been embraced by Microsoft.

As an aside, VSC was written by one of the developers of the Eclipse IDE. We’re getting the benefits of a lot of experience contributing to the JavaScript Ecosystem.

ECMA TC53

Peter Hoddie, best known for writing Apple’s QuickTime graphics library, initiated ECMA TC53: JavaScript modules for embedded systems. It reminds me of the early days of microprocessors when the chip people saw processors simply as a way to save a few chips in a circuit. We software types saw it very differently: very inexpensive computers we could own. That was the genesis of Steve Wozniak’s original Apple computer, which used the low-priced 6502 processor. I could then take that “toy” computer and turn it into a powerful financial analysis tool.

ECMA TC53 allows us to rethink how we build devices and hardware in general. We can now focus on the capabilities and well-written software rather than on the accidental properties of particular hardware and the pitfalls of C++ programming. Moddable’s JavaScript engine is called XS.

I’ve described how high-performance processors have allowed us to embrace the programming techniques that make the V8 engine so powerful. Conversely, we can now take our understanding to very small systems and gain the benefit of JavaScript even if the performance isn’t as high. The intellectual synergy is what is important.

JavaScript Inside

There are a number of other engines and platforms, including Johnny-Five, JerryScript, Espruino, WebOS, etc., which implement JavaScript (or, sometimes, subsets).

Each one is worthy of a discussion in its own right. What is important is that these engines have very small footprints. One point that the Espruino people make is that sending source (or tokenized) JavaScript can be more compact than using compiled code.

Users can add widgets and capabilities to the platforms without threatening their integrity, unless the platform providers choose to expose features that might be risky. This is similar to running apps on a smartphone, where you need to approve access to capabilities.

Browser Environments

The browser is more than just a JavaScript engine. It’s a virtual and portable operating system. It has become a first-class application environment with great support for mobile (AKA, small portable) devices.

Progressive Web Apps (PWAs) have a number of advantages over so-called native applications, including taking advantage of the full tooling of the JavaScript ecosystem. PWAs don’t need to be “installed.” They are web pages (more properly, applications) that can be available offline. They are indistinguishable from native applications but as convenient as visiting a web site.
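The offline capability comes from a service worker. A minimal sketch (the file names and the cache-first strategy are illustrative):

```ts
// Cache a few resources at install time...
self.addEventListener("install", (event: any) => {
  event.waitUntil(
    caches.open("v1").then((cache) => cache.addAll(["/", "/app.js"]))
  );
});

// ...then answer fetches from the cache, falling back to the network.
self.addEventListener("fetch", (event: any) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```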

The term “Progressive” is used because applications can progressively take advantage of features available at runtime, another reminder of taking advantage of on-the-ground conditions.

It is up to the underlying platform to choose which native capabilities are made available, and the user can decide how much to trust the app.

The New Set Top Box

One area where I expect browser engines to become dominant is as a TV viewing platform, as the new set-top box (not that there is room for a box on an LCD). The growing variety of boxes (Roku, Amazon Fire, Xfinity Flex, LG’s WebOS, Xiaomi, etc.) recalls the days of a variety of incompatible PCs that limited the ability to write portable software and to reach a wide market. The browser already supports TV viewing, and I find the experience better than the native ones in those boxes.

There are some user-experience (UX) elements that are missing. One is relatively simple: standardizing the use of the keys on a handheld controller, since the keyboard is often too clunky to hold while lounging. Another possibility is to have an API that makes it easy to write control applications so that users can create their own controls using smartphones. Instead of picture-in-picture, perhaps previews can be on the local device for one person while the family continues to watch the large screen. Users can then implement their own voice navigation and more.

Another reason for using the browser as the environment is that the nature of TV is changing. While a tablet might be used to control a large TV, it is also a first-class viewing device in its own right. We can have multiple viewing services and controllers that can work individually or in concert (pun partially intended). We need tools that allow us to create apps that work across and within each of the devices and environments.

Designing such an environment would be a good project for the CT Society as a vendor-neutral organization with connections to the TV broadcast industry. We already see elements of such an environment in Chromebook and LG’s WebOS.

Beyond the REST

JSON-based APIs have been associated with RESTful interfaces, but those are a subset. The use of HTTP/JSON is a very common pattern and makes it easy for JavaScript programs to provide and use services. Of course, the interface isn’t restricted to JavaScript, but I consider it part of the ecosystem because of the use of JSON as the data format.

The stark simplicity of JSON is in sharp contrast with earlier protocols that tended to use binary data structures and so were inscrutable. We also had XML interfaces on the path to JSON, but protocols like SOAP tended to be complicated, having inherited paradigms from statically typed languages. In addition to its simplicity, JSON payloads are easy to extend. It doesn’t matter if a number is passed in a string field as long as it can be converted to a number.

Using JSON means that it’s easy to evolve the interface and add new bundles of information that can be passed on without interpretation. This nicely matches the design point of the internet: intermediate applications can transfer data without having to understand all of it. The information is not fully trusted, so it’s safe. In fact, this lack of trust leads to resilience because it means verifying the contents where they are actually used, in context.
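A small sketch of that permissiveness (the field names are illustrative):

```ts
const payload = JSON.parse(
  '{"station": "back-yard", "temperature": "21.5", "extra": {"v": 2}}'
);

// Only the consumer that cares about the temperature validates it, in context:
const temperature = Number(payload.temperature); // string field, numeric value
if (!Number.isNaN(temperature)) {
  console.log(`${payload.station}: ${temperature} C`);
}
// payload.extra rides along without interpretation.
```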

Systems Thinking: Thinking outside the Box

Literally.

We can think of meta-devices that span multiple pieces of hardware, which is the inverse of multiple applications sharing a single device.

A weather reporting system can have monitoring software running on multiple computers, each contributing to the whole even though no particular system is critical, as long as we can create an overall map of the weather. JavaScript facilitates this, making code portable without artifacts of compilation. Even better is using Deno, so that the libraries themselves are runtime resources.

This is a way to think about an Internet of Things, an IoT: not carefully engineered systems with well-defined components but capabilities that can be composited dynamically using best-efforts connectivity. The devices can be general-purpose platforms such as a Raspberry Pi, with weather monitoring being one application on either a dedicated or shared device.

Rather than carefully engineering each connection, we can take a resilient approach. In the weather monitoring example, if a few systems go down, that’s not a problem since the information comes from the collection and is not critically dependent upon any single node. If there is a connectivity failure, the data can be stored and made available later. There is no need to distinguish between a failure of a link and a failure of a device. It’s about dynamic relationships among communities of things. This approach of finding value in the whole has many advantages in allowing innovation without relying on a provider’s guarantee of reliable delivery. Instead, we can take advantage of any opportunity to connect.
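A hedged sketch of such a best-efforts node (the URL and record shape are illustrative): deliver readings when connectivity allows and queue them locally when it doesn’t, without caring whether the link or the server was at fault.

```ts
type Reading = { station: string; tempC: number; at: string };

const queue: Reading[] = [];

async function report(reading: Reading): Promise<void> {
  queue.push(reading);
  try {
    while (queue.length > 0) {
      await fetch("https://example.com/weather", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(queue[0]),
      });
      queue.shift(); // delivered
    }
  } catch {
    // Connectivity failure: readings stay queued for a later attempt.
  }
}

report({ station: "roof-42", tempC: 18.5, at: new Date().toISOString() });
```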

Cloud-hosted Resources

The cloud grew up as a way to host massive computing resources. In an earlier generation, we used the term timesharing. We’re now evolving a more decentralized approach that includes local computing, using these shared facilities (AKA, the cloud) as a resource. Many applications that were cloud-based, such as machine learning, now run locally, at least in part.

The JSON APIs are part of the ecosystem, and so is portable code. You can think of an AWS Lambda as a snippet of code that can be invoked independently of the particular hardware and can run locally or remotely. Since JavaScript can be transferred so easily across systems, we can make creative use of resources that match our needs, taking advantage of computing power, locality, storage, and other characteristics dynamically. This is a reminder that it is about the application and not any particular computer hardware.
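A sketch of that portability, following the common Node-style handler shape (the event fields are illustrative):

```ts
// A snippet of code plus a JSON contract; where it runs is a deployment
// detail, not part of the code.
export const handler = async (event: { name?: string }) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ greeting: `Hello, ${event.name ?? "world"}` }),
  };
};

// Locally, it is just a function call:
// handler({ name: "Bob" }).then((r) => console.log(r.body));
```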

Beyond JavaScript

While it is convenient to see everything in terms of JavaScript, we aren’t limited to one language. JavaScript coexists with the Python community, which has some similar characteristics and rich libraries for data analysis, and with other communities. There are also more traditional languages like C# and Rust for situations where we need more intense parallelism and low-level access.

The power of the JavaScript ecosystem is in making it easy to write applications that add capabilities incrementally. It facilitates portability both because the code itself is portable and because simple JSON APIs facilitate cooperation across systems. The V8 engine is one example of the power of a large ecosystem that can invest in improving performance beyond what was supposed to be possible. The same ecosystem also gives us small engines that extend its reach.

My website is, of course, one example of using JavaScript and how it facilitates development. I recently added a small number of lines of JavaScript to add video-streaming capabilities. It took so little because of the synergy with the larger ecosystem. That synergy is what makes this ecosystem so exciting.

The Future of Consumer Technology

I see JavaScript as part of the larger trend of using software to define products and empowering people to be contributors and not just consumers, though that word “consumer” won’t go away any more than “dial” has for phone calls or “film” has for recording MP4 video.

The IEEE Consumer Electronics Society was originally part of the Broadcasting society in the days when you’d have a new industry for each technology. Today, a television is just one more app. JavaScript is at the nexus of this transformation.

A version of this article was originally published in the IEEE Journal.

Written by Bob Frankston, Member Board of Governors at IEEE Consumer Electronics Society

Author: Bob Frankston

Date: 2020-10-31

URL: http://www.circleid.com/posts/20201031-the-javascript-ecosystem/

