[Info-vax] HTTP/2
Jan-Erik Soderholm
jan-erik.soderholm at telia.com
Sun Jun 7 07:25:18 EDT 2015
johnwallace4 at yahoo.co.uk wrote on 2015-06-07 10:42:
> On Sunday, 7 June 2015 09:14:24 UTC+1, Dirk Munk wrote:
>> Stephen Hoffman wrote:
>>>
>>> ps/btw/fwiw: for those pondering the available implementations and the
>>> compatibility of web browsers and of web servers, there are some HTTP
>>> changes underway, with HTTP/2:
>>>
>>> http://chimera.labs.oreilly.com/books/1230000000545/ch12.html
>>>
>>
>> Thanks for pointing us at that page. I had been reading about this a few
>> weeks ago, and this is a big improvement indeed. I had been thinking
>> about something like this for years, in very vague terms, that is. It
>> always struck me how inefficient HTTP is at transmitting a web
>> page. For every item on the page a new connection is made, which wastes
>> resources and is very time-consuming. I always wondered whether it wouldn't
>> be possible to zip the whole page at the server and send it in one
>> stream. It seems the push mechanism of HTTP/2 does something
>> similar, inspecting the page for links to other objects and including
>> those objects in the stream before the client asks for them.
>
> Hmmm, looking briefly at the book, I can't help wondering whether
> this is a more formalised, more structured, and probably somewhat
> extended version of what Opera Mini (Mobile?) has been doing
> for years. As have other caching/compressing web proxies?
>
> If I were sufficiently interested I'd also want to understand how
> HTTP/2 works where the transfers are intended to be secure (https?).
> Encrypted content tends to be difficult to interpret (e.g. you can't
> see the links) and to compress, by anything other than the original
> server(s)?
But the server would check for additional data *before* the
encryption layer takes over, I guess.
I am not sure what the "original server" refers to. There is usually
only one "server" processing the browser's request.
Another issue is that in many cases, particularly at larger,
complex sites like eBay, there are different root URLs for
application scripts and for other data such as icons, images,
or other content to be displayed.
And then the user may have disabled downloading of some types
of content, which the server can hardly know about.
And finally, the browser may already have most of the additional
content cached, so no extra network transfer would happen
at all anyway.
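The caching point can be made concrete with a toy model of conditional requests (the function names here are invented for the sketch, not a real API): the browser revalidates its cached copy with If-None-Match, and the server answers 304 with an empty body, so no content crosses the network.

```python
# Toy model of cache revalidation with ETags. etag_for and serve
# are hypothetical names; a real server implements RFC 7232 semantics.
import hashlib

def etag_for(body):
    """Derive a content fingerprint to use as an ETag."""
    return '"%s"' % hashlib.sha1(body).hexdigest()[:16]

def serve(body, if_none_match=None):
    """Return (status, payload, etag), honouring If-None-Match."""
    tag = etag_for(body)
    if if_none_match == tag:
        return 304, b"", tag   # cached copy still valid: no body sent
    return 200, body, tag      # full transfer
```

A first request gets a 200 with the full body and an ETag; repeating the request with that ETag gets a 304 with an empty payload, which is why well-cached pages see little benefit from push.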
I'm not convinced that *that* part of HTTP/2 will be a major
benefit.
The two major points are, IMHO, header compression and keeping a
single TCP/IP connection between the client and the server.
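The compression point is easy to demonstrate. HTTP/2 actually compresses headers with its own HPACK scheme, not zlib; the sketch below just uses zlib to show how much redundancy repeated request headers carry when many requests share one connection.

```python
# Illustration only: zlib stands in for HTTP/2's HPACK header
# compression to show the redundancy in repeated request headers.
import zlib

# Twenty near-identical requests, as a browser might send over
# a single kept-alive connection while fetching page resources.
headers = (
    "GET /style.css HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "User-Agent: Mozilla/5.0\r\n"
    "Accept-Encoding: gzip\r\n\r\n"
) * 20

raw = headers.encode()
compressed = zlib.compress(raw, 9)
print(len(raw), len(compressed))  # compressed is far smaller
```

After the first request, each further header block is almost entirely repetition, which is exactly what header compression over a long-lived connection exploits.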
>
> Not that Google and friends would be interested in being able to
> access anyone's encrypted webstuff as easily as they can the rest.
>
> Want faster loading of web pages? Consider using an ad blocker,
> script blocker, Flash blocker, etc. Or use Lynx :)
>