
Introduction to HTTP/2


Surma

HTTP/2 will make our applications faster, simpler, and more robust (a rare combination) by allowing us to undo many of the HTTP/1.1 workarounds previously done within our applications and address these concerns within the transport layer itself. Even better, it also opens up a number of entirely new opportunities to optimize our applications and improve performance!

The primary goals for HTTP/2 are to reduce latency by enabling full request and response multiplexing, minimize protocol overhead via efficient compression of HTTP header fields, and add support for request prioritization and server push. To implement these requirements, there is a large supporting cast of other protocol enhancements, such as new flow control, error handling, and upgrade mechanisms, but these are the most important features that every web developer should understand and leverage in their applications.

HTTP/2 does not modify the application semantics of HTTP in any way. All the core concepts, such as HTTP methods, status codes, URIs, and header fields, remain in place. Instead, HTTP/2 modifies how the data is formatted (framed) and transported between the client and server, both of which manage the entire process, and hides all the complexity from our applications within the new framing layer. As a result, all existing applications can be delivered without modification.

Why not HTTP/1.2? #

To achieve the performance goals set by the HTTP Working Group, HTTP/2 introduces a new binary framing layer that is not backward compatible with previous HTTP/1.x servers and clients, hence the major protocol version increment to HTTP/2.

That said, unless you are implementing a web server (or a custom client) by working with raw TCP sockets, you won't see any difference: all the new, low-level framing is performed by the client and server on your behalf. The only observable differences will be improved performance and availability of new capabilities like request prioritization, flow control, and server push.

A brief history of SPDY and HTTP/2 #

SPDY was an experimental protocol, developed at Google and announced in mid 2009, whose primary goal was to try to reduce the load latency of web pages by addressing some of the well-known performance limitations of HTTP/1.1. Specifically, the outlined project goals were set as follows:

  • Target a 50% reduction in page load time (PLT).
  • Avoid the need for any changes to content by website authors.
  • Minimize deployment complexity, and avoid changes in network infrastructure.
  • Develop this new protocol in partnership with the open-source community.
  • Gather real performance data to (in)validate the experimental protocol.

Not long after the initial announcement, Mike Belshe and Roberto Peon, both software engineers at Google, shared their first results, documentation, and source code for the experimental implementation of the new SPDY protocol:

So far we have only tested SPDY in lab conditions. The initial results are very encouraging: when we download the top 25 websites over simulated home network connections, we see a significant improvement in performance: pages loaded up to 55% faster. (Chromium Blog)

Fast-forward to 2012 and the new experimental protocol was supported in Chrome, Firefox, and Opera, and a quickly growing number of sites, both large (for example, Google, Twitter, Facebook) and small, were deploying SPDY within their infrastructure. In effect, SPDY was on track to become a de facto standard through growing industry adoption.

Observing this trend, the HTTP Working Group (HTTP-WG) kicked off a new effort to take the lessons learned from SPDY, build and improve on them, and deliver an official "HTTP/2" standard. A new charter was drafted, an open call for HTTP/2 proposals was made, and after a lot of discussion within the working group, the SPDY specification was adopted as a starting point for the new HTTP/2 protocol.

Over the next few years SPDY and HTTP/2 continued to coevolve in parallel, with SPDY acting as an experimental branch that was used to test new features and proposals for the HTTP/2 standard. What looks good on paper may not work in practice, and vice versa, and SPDY offered a route to test and evaluate each proposal before its inclusion in the HTTP/2 standard. In the end, this process spanned three years and resulted in over a dozen intermediate drafts:

  • March 2012: Call for proposals for HTTP/2
  • November 2012: First draft of HTTP/2 (based on SPDY)
  • August 2014: HTTP/2 draft-17 and HPACK draft-12 are published
  • August 2014: Working Group last call for HTTP/2
  • February 2015: IESG approved HTTP/2 and HPACK drafts
  • May 2015: RFC 7540 (HTTP/2) and RFC 7541 (HPACK) are published

In early 2015 the IESG reviewed and approved the new HTTP/2 standard for publication. Shortly after that, the Google Chrome team announced their schedule to deprecate the SPDY and NPN extension for TLS:

HTTP/2's primary changes from HTTP/1.1 focus on improved performance. Some key features such as multiplexing, header compression, prioritization and protocol negotiation evolved from work done in an earlier open, but non-standard protocol named SPDY. Chrome has supported SPDY since Chrome 6, but since most of the benefits are present in HTTP/2, it's time to say goodbye. We plan to remove support for SPDY in early 2016, and to also remove support for the TLS extension named NPN in favor of ALPN in Chrome at the same time. Server developers are strongly encouraged to move to HTTP/2 and ALPN.

We're happy to have contributed to the open standards process that led to HTTP/2, and hope to see wide adoption given the broad industry engagement on standardization and implementation. (Chromium Blog)

The coevolution of SPDY and HTTP/2 enabled server, browser, and site developers to gain real-world experience with the new protocol as it was being developed. As a result, the HTTP/2 standard is one of the best and most extensively tested standards right out of the gate. By the time HTTP/2 was approved by the IESG, there were dozens of thoroughly tested and production-ready client and server implementations. In fact, just weeks after the final protocol was approved, many users were already enjoying its benefits as several popular browsers (and many sites) deployed full HTTP/2 support.

Design and technical goals #

Earlier versions of the HTTP protocol were intentionally designed for simplicity of implementation: HTTP/0.9 was a one-line protocol to bootstrap the World Wide Web; HTTP/1.0 documented the popular extensions to HTTP/0.9 in an informational standard; HTTP/1.1 introduced an official IETF standard; see Brief History of HTTP. As such, HTTP/0.9-1.x delivered exactly what it set out to do: HTTP is one of the most widely adopted application protocols on the Internet.

Unfortunately, implementation simplicity also came at a cost of application performance: HTTP/1.x clients need to use multiple connections to achieve concurrency and reduce latency; HTTP/1.x does not compress request and response headers, causing unnecessary network traffic; HTTP/1.x does not allow effective resource prioritization, resulting in poor use of the underlying TCP connection; and so on.

These limitations were not fatal, but as web applications continued to grow in their scope, complexity, and importance in our everyday lives, they imposed a growing burden on both the developers and users of the web, which is the exact gap that HTTP/2 was designed to address:

HTTP/2 enables a more efficient use of network resources and a reduced perception of latency by introducing header field compression and allowing multiple concurrent exchanges on the same connection… Specifically, it allows interleaving of request and response messages on the same connection and uses an efficient coding for HTTP header fields. It also allows prioritization of requests, letting more important requests complete more quickly, further improving performance.

The resulting protocol is more friendly to the network, because fewer TCP connections can be used in comparison to HTTP/1.x. This means less competition with other flows, and longer-lived connections, which in turn leads to better utilization of available network capacity. Finally, HTTP/2 also enables more efficient processing of messages through use of binary message framing. (Hypertext Transfer Protocol version 2, Draft 17)

It is important to note that HTTP/2 is extending, not replacing, the previous HTTP standards. The application semantics of HTTP are the same, and no changes were made to the offered functionality or core concepts such as HTTP methods, status codes, URIs, and header fields. These changes were explicitly out of scope for the HTTP/2 effort. That said, while the high-level API remains the same, it is important to understand how the low-level changes address the performance limitations of the previous protocols. Let's take a brief tour of the binary framing layer and its features.

Binary framing layer #

At the core of all performance enhancements of HTTP/2 is the new binary framing layer, which dictates how the HTTP messages are encapsulated and transferred between the client and server.

HTTP/2 binary framing layer

The "layer" refers to a design choice to introduce a new optimized encoding mechanism between the socket interface and the higher HTTP API exposed to our applications: the HTTP semantics, such as verbs, methods, and headers, are unaffected, but the way they are encoded while in transit is different. Unlike the newline-delimited plaintext HTTP/1.x protocol, all HTTP/2 communication is split into smaller messages and frames, each of which is encoded in binary format.

As a result, both client and server must use the new binary encoding mechanism to understand each other: an HTTP/1.x client won't understand an HTTP/2-only server, and vice versa. Thankfully, our applications remain blissfully unaware of all these changes, as the client and server perform all the necessary framing work on our behalf.

Streams, letters, and frames #

The introduction of the new binary framing mechanism changes how the data is exchanged between the client and server. To describe this process, let's familiarize ourselves with the HTTP/2 terminology:

  • Stream: A bidirectional flow of bytes within an established connection, which may carry one or more messages.
  • Message: A complete sequence of frames that map to a logical request or response message.
  • Frame: The smallest unit of communication in HTTP/2, each containing a frame header, which at a minimum identifies the stream to which the frame belongs.

The relation of these terms can be summarized as follows:

  • All communication is performed over a single TCP connection that can carry any number of bidirectional streams.
  • Each stream has a unique identifier and optional priority information that is used to carry bidirectional messages.
  • Each message is a logical HTTP message, such as a request, or response, which consists of one or more frames.
  • The frame is the smallest unit of communication that carries a specific type of data (for example, HTTP headers, message payload, and so on). Frames from different streams may be interleaved and then reassembled via the embedded stream identifier in the header of each frame.

HTTP/2 streams, messages, and frames

In short, HTTP/2 breaks down the HTTP protocol communication into an exchange of binary-encoded frames, which are then mapped to messages that belong to a particular stream, all of which are multiplexed within a single TCP connection. This is the foundation that enables all other features and performance optimizations provided by the HTTP/2 protocol.
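To make the framing concrete, here is a minimal sketch of a parser for the fixed 9-byte frame header defined in RFC 7540, section 4.1. The function name and the abbreviated type table are illustrative, not part of any real library API:

```python
# RFC 7540 frame types (abbreviated; the full table has more entries)
FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x4: "SETTINGS", 0x8: "WINDOW_UPDATE"}

def parse_frame_header(buf: bytes):
    """Parse the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1)."""
    if len(buf) < 9:
        raise ValueError("need at least 9 bytes")
    length = int.from_bytes(buf[0:3], "big")                   # 24-bit payload length
    frame_type = FRAME_TYPES.get(buf[3], "UNKNOWN")            # 8-bit type
    flags = buf[4]                                             # 8-bit flags
    stream_id = int.from_bytes(buf[5:9], "big") & 0x7FFFFFFF   # 31 bits, reserved bit masked
    return length, frame_type, flags, stream_id

# A HEADERS frame with the END_HEADERS (0x4) flag on stream 1, 16-byte payload:
header = (16).to_bytes(3, "big") + bytes([0x1, 0x4]) + (1).to_bytes(4, "big")
print(parse_frame_header(header))  # (16, 'HEADERS', 4, 1)
```

Every frame on the wire starts with this same 9-byte prefix, which is what lets a receiver demultiplex frames back onto their streams without parsing any payload.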

Request and response multiplexing #

With HTTP/1.x, if the client wants to make multiple parallel requests to improve performance, then multiple TCP connections must be used (see Using Multiple TCP Connections). This behavior is a direct consequence of the HTTP/1.x delivery model, which ensures that only one response can be delivered at a time (response queuing) per connection. Worse, this also results in head-of-line blocking and inefficient use of the underlying TCP connection.

The new binary framing layer in HTTP/2 removes these limitations, and enables full request and response multiplexing, by allowing the client and server to break down an HTTP message into independent frames, interleave them, and then reassemble them on the other end.

HTTP/2 request and response multiplexing within a shared connection

The snapshot captures multiple streams in flight within the same connection. The client is transmitting a DATA frame (stream 5) to the server, while the server is transmitting an interleaved sequence of frames to the client for streams 1 and 3. As a result, there are three parallel streams in flight.

The ability to break down an HTTP message into independent frames, interleave them, and then reassemble them on the other end is the single most important enhancement of HTTP/2. In fact, it introduces a ripple effect of numerous performance benefits across the entire stack of all web technologies, enabling us to:

  • Interleave multiple requests in parallel without blocking on any one.
  • Interleave multiple responses in parallel without blocking on any one.
  • Use a single connection to deliver multiple requests and responses in parallel.
  • Remove unnecessary HTTP/1.x workarounds (see Optimizing for HTTP/1.x), such as concatenated files, image sprites, and domain sharding.
  • Deliver lower page load times by eliminating unnecessary latency and improving utilization of available network capacity.
  • And much more…

The new binary framing layer in HTTP/2 resolves the head-of-line blocking problem found in HTTP/1.x and eliminates the need for multiple connections to enable parallel processing and delivery of requests and responses. As a result, this makes our applications faster, simpler, and cheaper to deploy.
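The interleave-and-reassemble step can be sketched in a few lines. This toy helper (the name is illustrative) groups frames arriving in arbitrary order on one connection back into per-stream messages, keyed by the stream identifier carried in each frame header:

```python
from collections import defaultdict

def reassemble(frames):
    """Group interleaved (stream_id, payload) frames back into per-stream messages.

    Frames from different streams may arrive in any relative order; within a
    single stream, frame order is preserved by the transport.
    """
    streams = defaultdict(list)
    for stream_id, chunk in frames:
        streams[stream_id].append(chunk)
    return {sid: b"".join(chunks) for sid, chunks in streams.items()}

# Frames for streams 1 and 3 arrive interleaved on the same connection:
wire = [(1, b"<html>"), (3, b"body{"), (1, b"</html>"), (3, b"}")]
print(reassemble(wire))  # {1: b'<html></html>', 3: b'body{}'}
```

Because each frame names its stream, neither side ever has to wait for one response to finish before frames of another can be sent, which is exactly what eliminates HTTP/1.x response queuing.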

Stream prioritization #

Once an HTTP message can be split into many individual frames, and we allow for frames from multiple streams to be multiplexed, the order in which the frames are interleaved and delivered both by the client and server becomes a critical performance consideration. To facilitate this, the HTTP/2 standard allows each stream to have an associated weight and dependency:

  • Each stream may be assigned an integer weight between 1 and 256.
  • Each stream may be given an explicit dependency on another stream.

The combination of stream dependencies and weights allows the client to construct and communicate a "prioritization tree" that expresses how it would prefer to receive responses. In turn, the server can use this information to prioritize stream processing by controlling the allocation of CPU, memory, and other resources, and once the response data is available, allocation of bandwidth to ensure optimal delivery of high-priority responses to the client.

HTTP/2 stream dependencies and weights

A stream dependency within HTTP/2 is declared by referencing the unique identifier of another stream as its parent; if the identifier is omitted the stream is said to be dependent on the "root stream". Declaring a stream dependency indicates that, if possible, the parent stream should be allocated resources ahead of its dependencies. In other words, "Please process and deliver response D before response C".

Streams that share the same parent (in other words, sibling streams) should be allocated resources in proportion to their weight. For example, if stream A has a weight of 12 and its one sibling B has a weight of 4, then to determine the proportion of the resources that each of these streams should receive:

  1. Sum all the weights: 4 + 12 = 16
  2. Divide each stream weight by the total weight: A = 12/16, B = 4/16

Thus, stream A should receive three-quarters and stream B should receive one-quarter of available resources; stream B should receive one-third of the resources allocated to stream A. Let's work through a few more hands-on examples in the image above. From left to right:

  1. Neither stream A nor B specifies a parent dependency and are said to be dependent on the implicit "root stream"; A has a weight of 12, and B has a weight of 4. Thus, based on proportional weights: stream B should receive one-third of the resources allocated to stream A.
  2. Stream D is dependent on the root stream; C is dependent on D. Thus, D should receive full allocation of resources ahead of C. The weights are inconsequential because C's dependency communicates a stronger preference.
  3. Stream D should receive full allocation of resources ahead of C; C should receive full allocation of resources ahead of A and B; stream B should receive one-third of the resources allocated to stream A.
  4. Stream D should receive full allocation of resources ahead of E and C; E and C should receive equal allocation ahead of A and B; A and B should receive proportional allocation based on their weights.

As the above examples illustrate, the combination of stream dependencies and weights provides an expressive language for resource prioritization, which is a critical feature for improving browsing performance where we have many resource types with different dependencies and weights. Even better, the HTTP/2 protocol also allows the client to update these preferences at any point, which enables further optimizations in the browser. In other words, we can change dependencies and reallocate weights in response to user interaction and other signals.
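The weight arithmetic used in the examples above is simple enough to express directly. This small helper (the name is illustrative) computes the proportional share for a set of sibling streams:

```python
def sibling_shares(weights):
    """Proportional resource share for sibling streams.

    `weights` maps a stream label to its HTTP/2 weight (an integer 1-256);
    siblings split the parent's resources in proportion to these weights.
    """
    total = sum(weights.values())
    return {stream: w / total for stream, w in weights.items()}

# Stream A has weight 12, its sibling B has weight 4: A gets 12/16, B gets 4/16.
shares = sibling_shares({"A": 12, "B": 4})
print(shares)  # {'A': 0.75, 'B': 0.25}
```

Dependencies then compose with this rule: a parent is served first, and whatever resources flow to a group of siblings are divided by this proportion.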

One connection per origin #

With the new binary framing mechanism in place, HTTP/2 no longer needs multiple TCP connections to multiplex streams in parallel; each stream is split into many frames, which can be interleaved and prioritized. As a result, all HTTP/2 connections are persistent, and only one connection per origin is required, which offers numerous performance benefits.

For both SPDY and HTTP/2 the killer feature is arbitrary multiplexing on a single well congestion controlled channel. It amazes me how important this is and how well it works. One great metric around that which I enjoy is the fraction of connections created that carry just a single HTTP transaction (and thus make that transaction bear all the overhead). For HTTP/1 74% of our active connections carry just a single transaction; persistent connections just aren't as helpful as we all want. But in HTTP/2 that number plummets to 25%. That's a huge win for overhead reduction. (HTTP/2 is Live in Firefox, Patrick McManus)

Most HTTP transfers are short and bursty, whereas TCP is optimized for long-lived, bulk data transfers. By reusing the same connection, HTTP/2 is able to both make more efficient use of each TCP connection, and also significantly reduce the overall protocol overhead. Further, the use of fewer connections reduces the memory and processing footprint along the full connection path (in other words, client, intermediaries, and origin servers). This reduces the overall operational costs and improves network utilization and capacity. As a result, the move to HTTP/2 should not only reduce network latency, but also help improve throughput and reduce operational costs.

Flow control #

Flow control is a mechanism to prevent the sender from overwhelming the receiver with data it may not want or be able to process: the receiver may be busy, under heavy load, or may only be willing to allocate a fixed amount of resources for a particular stream. For example, the client may have requested a large video stream with high priority, but the user has paused the video and the client now wants to pause or throttle its delivery from the server to avoid fetching and buffering unnecessary data. Alternatively, a proxy server may have fast downstream and slow upstream connections and similarly wants to regulate how quickly the downstream delivers data to match the speed of upstream to control its resource usage; and so on.

Do the above requirements remind you of TCP flow control? They should, as the problem is effectively identical (see Flow Control). However, because the HTTP/2 streams are multiplexed within a single TCP connection, TCP flow control is both not granular enough, and does not provide the necessary application-level APIs to regulate the delivery of individual streams. To address this, HTTP/2 provides a set of simple building blocks that allow the client and server to implement their own stream- and connection-level flow control:

  • Flow control is directional. Each receiver may choose to set any window size that it desires for each stream and the entire connection.
  • Flow control is credit-based. Each receiver advertises its initial connection and stream flow control window (in bytes), which is reduced whenever the sender emits a DATA frame and incremented via a WINDOW_UPDATE frame sent by the receiver.
  • Flow control cannot be disabled. When the HTTP/2 connection is established the client and server exchange SETTINGS frames, which set the flow control window sizes in both directions. The default value of the flow control window is set to 65,535 bytes, but the receiver can set a large maximum window size (2^31-1 bytes) and maintain it by sending a WINDOW_UPDATE frame whenever any data is received.
  • Flow control is hop-by-hop, not end-to-end. That is, an intermediary can use it to control resource use and implement resource allocation mechanisms based on its own criteria and heuristics.

HTTP/2 does not specify any particular algorithm for implementing flow control. Instead, it provides the simple building blocks and defers the implementation to the client and server, which can use them to implement custom strategies to regulate resource use and allocation, as well as implement new delivery capabilities that may help improve both the real and perceived performance (see Speed, Performance, and Human Perception) of our web applications.

For example, application-layer flow control allows the browser to fetch only a part of a particular resource, put the fetch on hold by reducing the stream flow control window down to zero, and then resume it later. In other words, it allows the browser to fetch a preview or first scan of an image, display it and let other high-priority fetches proceed, and resume the fetch once more critical resources have finished loading.
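The credit-based bookkeeping described in the bullets above amounts to a single counter per stream (and one for the connection). A minimal sketch, with an illustrative class name; a real implementation would also enforce the 2^31-1 maximum and handle errors per RFC 7540:

```python
class FlowControlWindow:
    """Minimal credit-based flow-control bookkeeping for one stream or connection."""

    def __init__(self, initial=65_535):  # default window size per RFC 7540
        self.available = initial

    def consume(self, nbytes):
        """Sender side: emitting a DATA frame of nbytes spends that much credit."""
        if nbytes > self.available:
            raise RuntimeError("would exceed flow-control window; sender must wait")
        self.available -= nbytes

    def window_update(self, increment):
        """Receiver side: a WINDOW_UPDATE frame grants the sender more credit."""
        self.available += increment

win = FlowControlWindow()
win.consume(16_384)        # one full default-size DATA frame payload
win.window_update(16_384)  # receiver replenishes the credit
print(win.available)       # 65535
```

Setting the window to zero (never replenishing it) is exactly the "put the fetch on hold" trick from the paragraph above: the sender simply runs out of credit and must pause.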

Server push #

Another powerful new feature of HTTP/2 is the ability of the server to send multiple responses for a single client request. That is, in addition to the response to the original request, the server can push additional resources to the client (Figure 12-5), without the client having to request each one explicitly.

Server initiates new streams (promises) for push resources

Why would we need such a mechanism in a browser? A typical web application consists of dozens of resources, all of which are discovered by the client by examining the document provided by the server. As a result, why not eliminate the extra latency and let the server push the associated resources ahead of time? The server already knows which resources the client will require; that's server push.

In fact, if you have ever inlined a CSS, JavaScript, or any other asset via a data URI (see Resource Inlining), then you already have hands-on experience with server push. By manually inlining the resource into the document, we are, in effect, pushing that resource to the client, without waiting for the client to request it. With HTTP/2 we can achieve the same results, but with additional performance benefits. Push resources can be:

  • Cached by the client
  • Reused across different pages
  • Multiplexed alongside other resources
  • Prioritized by the server
  • Declined by the client

PUSH_PROMISE 101 #

All server push streams are initiated via PUSH_PROMISE frames, which signal the server's intent to push the described resources to the client and need to be delivered ahead of the response data that requests the pushed resources. This delivery order is critical: the client needs to know which resources the server intends to push to avoid creating duplicate requests for these resources. The simplest strategy to satisfy this requirement is to send all PUSH_PROMISE frames, which contain just the HTTP headers of the promised resource, ahead of the parent's response (in other words, DATA frames).

Once the client receives a PUSH_PROMISE frame it has the option to decline the stream (via a RST_STREAM frame) if it wants to. (This might occur, for example, because the resource is already in cache.) This is an important improvement over HTTP/1.x. By contrast, the use of resource inlining, which is a popular "optimization" for HTTP/1.x, is equivalent to a "forced push": the client cannot opt out, cancel it, or process the inlined resource individually.

With HTTP/2 the client remains in full control of how server push is used. The client can limit the number of concurrently pushed streams; adjust the initial flow control window to control how much data is pushed when the stream is first opened; or disable server push entirely. These preferences are communicated via the SETTINGS frames at the beginning of the HTTP/2 connection and may be updated at any time.
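One way to picture this control loop is a server-side check against the client's advertised settings before promising a push. This is a hypothetical sketch (the `may_push` helper and its policy are illustrative, not a real server API); the two setting identifiers are the ones defined in RFC 7540, section 6.5.2:

```python
# SETTINGS identifiers from RFC 7540, section 6.5.2
SETTINGS_ENABLE_PUSH = 0x2
SETTINGS_MAX_CONCURRENT_STREAMS = 0x3

def may_push(client_settings, open_pushed_streams):
    """Decide whether the server may send another PUSH_PROMISE.

    `client_settings` maps setting identifiers to the values the client sent in
    its SETTINGS frame; ENABLE_PUSH defaults to 1 (on) when unspecified.
    """
    if client_settings.get(SETTINGS_ENABLE_PUSH, 1) == 0:
        return False  # client disabled server push entirely
    limit = client_settings.get(SETTINGS_MAX_CONCURRENT_STREAMS)
    return limit is None or open_pushed_streams < limit

print(may_push({SETTINGS_ENABLE_PUSH: 1, SETTINGS_MAX_CONCURRENT_STREAMS: 2}, 1))  # True
print(may_push({SETTINGS_ENABLE_PUSH: 0}, 0))  # False
```

Because the client may send an updated SETTINGS frame at any time, a server would re-evaluate this check as the connection evolves rather than deciding once.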

Each pushed resource is a stream that, unlike an inlined resource, can be individually multiplexed, prioritized, and processed by the client. The only security restriction, as enforced by the browser, is that pushed resources must obey the same-origin policy: the server must be authoritative for the provided content.

Header compression #

Each HTTP transfer carries a set of headers that describe the transferred resource and its properties. In HTTP/1.x, this metadata is always sent as plain text and adds anywhere from 500–800 bytes of overhead per transfer, and sometimes kilobytes more if HTTP cookies are being used. (See Measuring and Controlling Protocol Overhead.) To reduce this overhead and improve performance, HTTP/2 compresses request and response header metadata using the HPACK compression format, which uses two simple but powerful techniques:

  1. It allows the transmitted header fields to be encoded via a static Huffman code, which reduces their individual transfer size.
  2. It requires that both the client and server maintain and update an indexed list of previously seen header fields (in other words, it establishes a shared compression context), which is then used as a reference to efficiently encode previously transmitted values.

Huffman coding allows the individual values to be compressed when transferred, and the indexed list of previously transferred values allows us to encode duplicate values by transferring index values that can be used to efficiently look up and reconstruct the full header keys and values.

HPACK: Header Compression for HTTP/2

As one further optimization, the HPACK compression context consists of a static and dynamic table: the static table is defined in the specification and provides a list of common HTTP header fields that all connections are likely to use (e.g., valid header names); the dynamic table is initially empty and is updated based on exchanged values within a particular connection. As a result, the size of each request is reduced by using static Huffman coding for values that haven't been seen before, and substitution of indexes for values that are already present in the static or dynamic tables on each side.
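The shared-context idea can be illustrated without real HPACK machinery. The toy encoder below (class name and wire format are illustrative; real HPACK adds Huffman coding, the spec-defined static table, and table eviction) sends a header as a literal the first time and as a small index on every repeat:

```python
class HpackSketch:
    """Toy shared-context header encoder: first occurrence goes out as a
    literal and enters the table; repeats are replaced by a small index."""

    def __init__(self):
        self.table = {}  # (name, value) -> index, mimicking the dynamic table

    def encode(self, name, value):
        key = (name, value)
        if key in self.table:
            return ("indexed", self.table[key])  # one small integer on the wire
        self.table[key] = len(self.table) + 1    # remember it for next time
        return ("literal", name, value)

enc = HpackSketch()
print(enc.encode("cookie", "id=42"))  # ('literal', 'cookie', 'id=42')
print(enc.encode("cookie", "id=42"))  # ('indexed', 1)
```

Since headers like `cookie` and `user-agent` repeat on nearly every request of a connection, replacing them with an index is where most of HPACK's savings come from.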

Security and performance of HPACK #

Early versions of HTTP/2 and SPDY used zlib, with a custom dictionary, to compress all HTTP headers. This delivered an 85% to 88% reduction in the size of the transferred header data, and a significant improvement in page load time latency:

On the lower-bandwidth DSL link, in which the upload link is only 375 Kbps, request header compression in particular led to significant page load time improvements for certain sites (in other words, those that issued a large number of resource requests). We found a reduction of 45–1142 ms in page load time simply due to header compression. (SPDY whitepaper, chromium.org)

However, in the summer of 2012, the "CRIME" security attack was published against TLS and SPDY compression algorithms, which could result in session hijacking. As a result, the zlib compression algorithm was replaced by HPACK, which was specifically designed to: address the discovered security issues, be efficient and simple to implement correctly, and of course, enable good compression of HTTP header metadata.

For full details of the HPACK compression algorithm, see IETF HPACK - Header Compression for HTTP/2.

Farther reading #

  • "HTTP/2" – The full article by Ilya Grigorik
  • "Setting up HTTP/2" – How to set up HTTP/2 in different backends by Surma
  • "HTTP/2 is hither, let'south optimize!" – Presentation by Ilya Grigorik from Velocity 2015
  • "Rules of Thumb for HTTP/2 Push" – An analysis by Tom Bergan, Simon Pelchat and Michael Buettner on when and how to use push.



Source: https://developers.google.com/web/fundamentals/performance/http2
