Merging Policy to Optimize the Multicasting Delivery Scheme

Abstract. The advance of Internet2 and the proliferation of switches and routers with layer-three functionality have made multicast one of the most feasible video streaming delivery techniques for the near future. Assuming this to be true, this study addresses the overload that a streaming server can suffer due to client requests. As a solution, we propose a new multicast delivery scheme that allows every active client to collaborate with the server regardless of the video that they are watching, alleviating server load and therefore server resource requirements. The solution combines the multicast delivery scheme with client-side buffer collaboration in order to decentralize the delivery process. The new video delivery scheme is designed as two separate policies: the first policy uses client collaboration to deliver the first part of videos, and the second policy can merge two or more multicast channels using distributed collaboration among a group of clients. Experimental results show that this scheme outperforms previous schemes in terms of resource requirements and scalability.


Introduction
The rapid growth in the commercial use of the Internet (distance learning, Video on Demand (VoD) and digital video libraries) has generated a substantial increase in the demand for video streaming systems. In video streaming environments, users request the videos they desire and a server delivers the requested video information, allocating, under the simplest delivery technique, a dedicated unicast server channel for each video request. Even though the unicast delivery scheme is easy to implement, it is excessively expensive and lacks scalability.
In order to reduce the cost of video delivery and attain high server scalability, three complementary research approaches have been investigated: (1) server transmission schemes using multicast, a strategy that allows users to share server and network bandwidth to reduce the individual service cost; (2) video streaming with application-layer multicast, which enables multicast transmission schemes beyond a local area network, assuming only IP unicast at the network layer; and (3) proxy caching [6], enabling high scalability for clients dispersed across a wide area. The main focus of this study is the design of multicast delivery in order to reduce the individual service cost; specifically, we propose a delivery scheme that is able to offer true VoD services [10].
With a Batching technique, requests for the same video that are submitted within the same short time interval are served by a single multicast channel. Clients suffer a certain waiting time, and the average length of this waiting time depends on the policy for selecting the clients to serve with the first available channel. Due to this waiting time, a Batching approach only provides near-VoD service. A Batching approach is also called static multicast, since late-coming requests are not allowed to join any already ongoing multicast channel. With Patching, however, clients are dynamically assigned to join multicast channels. Since late-coming clients miss part of the video information, a separate unicast channel, called a patch stream, is needed to deliver the first part of the video. The Patching approach assumes that clients can simultaneously download two streams and have a local buffer capable of saving t minutes of video. While a client is watching the video from the patch stream, the video information arriving from the multicast channel is buffered. Even though the Patching policy provides true-VoD service, the server resource requirement grows with the request arrival frequency due to the unicast channels. Furthermore, a request is only able to join a multicast channel if the difference between the request arrival time and the multicast channel start time is lower than t.
Like Patching, Adaptive Piggybacking and Merging are also dynamic multicast approaches. In the Piggybacking policy, the server slows down and speeds up the delivery rate of two consecutive multicast channels in order to merge them into one. The number of channels that Piggybacking can merge is limited by the fact that no more than a 5% adjustment of the delivery rate is allowed, in order to preserve the display quality that clients receive. The Merging policy, however, does not change the display quality: two multicast channels are merged using the client buffer. In the Merging policy, while clients are playing the video, they try to buffer video information from a previous multicast channel. This policy can only merge channels whose start times differ by no more than the length of video information that each client is able to save in its buffer.
The main ideas behind Chaining and CVC are fairly similar. Both policies are based on the creation of a delivery chain in which video information is forwarded from one client to another. With these policies, a new client receives the video from an existing chain and does not consume any server bandwidth. However, delivery chains could only be formed if the interarrival times of client requests are short, limited by the size of each individual client's buffer. Furthermore, only clients that are watching the same video can take part in the formation of the chain.
In this paper, we propose a new delivery technique called Dynamic Distributed Collaborative Merging (DDCM). The DDCM technique is based on the peer-to-peer paradigm and allows every active client to collaborate with the server regardless of the video that they are watching. The client collaborations are performed by two complementary delivery policies. Under the first policy, while successive incoming requests are allowed to join an existing multicast channel, the missed video information (patch stream) is delivered by another client who is playing the same video. Unlike the Patching policy, the patch stream does not consume the server's resources. The aim of the second policy is to dynamically merge multicast channels using distributed buffers. More than one client of different videos could be used in the merging process of two channels. The merge policy is able to merge multicast channels regardless of the time between their start-times. The merge policy enables clients of unpopular videos to help the server to merge the channels of more popular videos, and vice versa.
The rest of the paper is organized as follows: in section 2, we show the key ideas behind DDCM. Performance evaluation is shown in section 3. In section 4, we indicate the main conclusions of our results and future work is explained in the final section.

Dynamic Distributed Collaborative Merging Scheme
In the delivery scheme design, we assume that clients are able to hold two symmetric channels. We assume that video information is encoded with Constant Bitrate (CBR) and that each client channel is able to receive/send one video stream. We refer to the unicast channel that delivers the first part of a video as the patch stream, and to the multicast channel that delivers the complete video as the complete stream.
The DDCM delivery scheme is designed as two separate policies: 1) Patch Stream Manager (PSM) whose main role is to deliver patch streams using client collaborations. 2) Complete Stream Manager (CSM). The main function of this second policy is to try to merge two or more complete streams into one.

Patch Stream Manager Design
When the first request from client C_i arrives at time t_i, the server opens a new complete stream (M1) for the client. When a second request from client C_i+1 arrives at time t_i+1, the server decides whether or not the client can be served using the previous complete stream (M1). To be served this way, client C_i+1 must have enough buffer to save more than (t_i+1 − t_i) seconds of video information from the complete stream. If not, the server will open a new complete channel. Otherwise, a patch stream is needed to send the video information from 0 to (t_i+1 − t_i); the remaining video information (from (t_i+1 − t_i) to the end) will be sent by the previous complete stream.
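This admission rule can be expressed as a minimal sketch (our own illustration, not the authors' prototype code; the function names and the dictionary result are hypothetical):

```python
# Sketch of the PSM admission decision for a new request. A client joins the
# previous complete stream only if its buffer can hold the missed prefix.

def can_join(request_time, stream_start, buffer_len):
    """True if the client's buffer can hold more than
    (request_time - stream_start) seconds of video."""
    return (request_time - stream_start) < buffer_len

def serve_request(request_time, stream_start, buffer_len):
    """Return the actions needed to serve a request under the PSM rule."""
    if can_join(request_time, stream_start, buffer_len):
        # Patch stream covers the missed first part: 0 .. (t_{i+1} - t_i).
        return {"new_complete_stream": False,
                "patch_length": request_time - stream_start}
    # Buffer too small: the server opens a new complete stream instead.
    return {"new_complete_stream": True, "patch_length": 0}
```

For example, with a 3-minute client buffer, a request arriving 1 minute after the stream start only needs a 1-minute patch, while a request arriving 4 minutes later forces a new complete stream.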
For each new patch stream, the PSM policy searches for an active client that has the first part of the video in its local buffer. In such a case, a collaborative client opens a patch stream and sends the first part of the video, and client C_i+1 also joins the previous multicast complete stream. Should there be no such client, the server starts a new patch stream using server bandwidth. Fig 1 shows the delivery process following PSM for 6 clients. Each client has 3 minutes of buffer and client requests arrive at minutes 1, 2, 3, 5, 6, and 7. Under each client's name, we indicate the length of buffer that the client dedicates to the collaboration. For example, C3's collaboration buffer is 1 minute, while C2 collaborates with a buffer of 2 minutes. These values depend on the length of the patch stream that the client itself needed during the delivery process. In the case of C1, no patch stream was needed, so the full buffer (3 minutes) is used for collaboration. In order to let the server know the buffer size that each client dedicates to collaboration, each client sends a control message to the server when it has filled its buffer with the first part of the video. Fig 1 shows that C2 and C3 are served using one multicast channel and 2 patch streams using server bandwidth. In the case of C5 and C6, the patch streams are delivered by C3 and C2 respectively. As we can see in Fig 1, the PSM policy is capable of delivering patch streams without consuming the server's bandwidth after minute 5.
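The collaborator search described above can be sketched as follows (a hedged illustration under our own assumptions: the client records and the `find_collaborator` helper are hypothetical, not part of the paper):

```python
# Sketch of the PSM collaborator search: pick an active client whose reported
# collaboration buffer already holds at least the needed patch length.

def find_collaborator(active_clients, patch_length):
    """Return a client able to serve the patch stream, or None (in which
    case the server delivers the patch itself, using its own bandwidth)."""
    for client in active_clients:
        # collab_buffer: minutes of the first part of the video that the
        # client reported to the server as available for collaboration.
        if client["collab_buffer"] >= patch_length and not client["busy"]:
            return client
    return None
```

Using the Fig 1 values, C3 (1 minute of collaboration buffer) can serve a 1-minute patch, while a 2-minute patch falls to C2.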
The more clients the server accepts, the more client collaborations are produced under the PSM policy. This characteristic makes the PSM especially suitable as a delivery scheme for highly demanded videos, where many patch streams are needed. However, after several minutes the server has more than one client able to serve the same patch stream. This redundancy implies poor client resource utilization, since many clients will not be involved in the PSM collaboration mechanism.

Complete Stream Manager Design
The CSM's aim is to merge the existing complete streams. Once a complete stream is merged into another, the complete stream will not consume any server resources. Since complete streams are usually long, and therefore demand most of the server's resources, the CSM efficiently replaces the server resource demand with client collaborations. The CSM scheme achieves a high degree of client resource utilization since almost every client is involved in the collaboration mechanism regardless of the video that they are watching.
Given two multicast channels (M1 and M2), the key idea of CSM is that a group of clients forms a collaborative buffer to merge M2 with M1. The multicast channel (M2) from the server is then replaced by an equivalent channel delivered by the collaborative clients. Since more than one client can be used in the merging process, the CSM is able to merge multicast channels regardless of the time between their start times. Fig 2 shows the collaborative buffer created by clients C1, C2, C3 and C6, which collaborate by providing 3, 2, 1 and 1 minutes of buffer respectively; a total of 7 minutes of video information can be saved in this collaborative buffer. Once channel M2 is merged with M1, the CSM has to guarantee that while a client is delivering video blocks, there are enough other clients saving the video blocks being delivered by the server on M1. Since each client can only use one stream in the collaboration, either to deliver or to save video information, the two processes (delivery and saving) have to be performed separately. In the case of Fig 3, while C4 is delivering block I, it should not receive any video information except the video that C4 is playing.

Client Collaboration Group Construction Process
In the merging process, two parameters are determined by the CSM: 1) the client collaboration (B_Ci), the size of the buffer of each client C_i that is dedicated to the formation of the collaborative buffer; and 2) the accumulated buffer size, the total size of the collaborative buffer. The values of these two parameters are determined under two constraints: a) a client cannot use more buffer than it has; b) a client only uses one channel in the collaboration process.
Constraint a) is trivial and requires no further explanation. We established two conditions for the CSM group construction process in order to satisfy constraint b). Suppose that the CSM is interested in merging two channels whose start times are separated by S units of time. We can formulate these two conditions as follows: given a collaboration group CG of clients {C_1, C_2, ..., C_n}, in which the buffer collaborations of the clients are {B_C1, B_C2, ..., B_Cn}, the CSM has to satisfy: 1. Maximum collaboration: the collaboration (B_Ci) of a client C_i cannot be greater than the value of S.
2. Minimum accumulated buffer size: the total accumulated buffer size (BL) has to be greater than or equal to S + max_i{B_Ci}.
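These two conditions amount to a simple feasibility test, which we sketch below (the function name and the list representation of collaborations are ours, not from the paper):

```python
# Feasibility test for merging two complete streams whose start times differ
# by `gap` (S in the text), given the buffer each client offers.

def can_merge(collaborations, gap):
    """Conditions from the CSM design:
    (1) no single collaboration B_Ci exceeds the gap S;
    (2) the accumulated buffer BL is at least S + max(B_Ci)."""
    if not collaborations:
        return False
    if max(collaborations) > gap:       # condition (1)
        return False
    return sum(collaborations) >= gap + max(collaborations)  # condition (2)
```

With the Fig 2 collaborations (3, 2, 1 and 1 minutes, BL = 7), a 4-minute gap is mergeable (7 ≥ 4 + 3), while a 5-minute gap is not (7 < 5 + 3).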
Satisfying conditions (1) and (2), and independently of the value of S, we get condition (3): for every client C_i, BL − B_Ci ≥ S ≥ B_Ci. Condition (3) indicates that the accumulated buffer of the group excluding a client C_i is always bigger than C_i's collaboration (B_Ci) and bigger than S. This means that the CSM guarantees that while a client C_i is sending video information, there are enough other clients saving video information from the earlier multicast channel; and, conversely, while a client C_i is saving video information, there are enough other clients sending it. Since a client does not have to save information while it is sending, or vice versa, the CSM guarantees that in the collaboration process each client uses no more than one channel, leaving the other free for playback. The CSM constructs the collaboration group in accordance with the following steps:
Step 1: The CSM calculates S for every pair of channels that could be merged and chooses the pair with the smallest S as the channels to be merged.
Step 2: Satisfying condition (2), the DDCM forms a list of clients {C_1, C_2, ..., C_n}. In this step, the maximum collaboration of each client C_i is limited by condition (1).
Step 3: The blocks of video (Vb_j) that a client C_i has to save and deliver are determined by BL (the total accumulated size of the collaborative buffer), B_Ci (the collaboration of client C_i) and StartBlock (the block number that indicates the starting point of the merging process).
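Steps 1 and 2 can be sketched as a greedy selection, under our own simplifying assumptions (the candidate tuples and the `build_group` name are hypothetical, not from the paper):

```python
# Greedy sketch of CSM group construction: add clients until the accumulated
# buffer satisfies condition (2), capping each collaboration by condition (1).

def build_group(candidates, gap):
    """candidates: list of (client_id, free_buffer) pairs.
    Returns a list of (client_id, B_Ci) forming a valid collaboration group
    for a start-time gap S = `gap`, or None if the candidates do not
    provide enough accumulated buffer."""
    group = []
    for cid, free_buffer in candidates:
        b = min(free_buffer, gap)           # condition (1): B_Ci <= S
        if b <= 0:
            continue
        group.append((cid, b))
        total = sum(x for _, x in group)
        largest = max(x for _, x in group)
        if total >= gap + largest:          # condition (2): BL >= S + max
            return group
    return None
```

Feeding in the Fig 2 clients (C1, C2, C3 and C6 with 3, 2, 1 and 1 free minutes) and a 4-minute gap yields the 7-minute collaborative buffer described in the text.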

Performance Evaluation
We have used our prototype to evaluate the performance of the DDCM. There are three key questions that we are interested in addressing: 1) how much reduction in server bandwidth could be achieved using DDCM in accordance with the video's popularity? 2) How much server bandwidth is required using DDCM when the system is offering more than one video? 3) How could the client collaboration following the DDCM scheme help in a high-demand situation?
The DDCM is implemented in our prototype in C++ under Linux. We have implemented all the necessary client features in a Xine player plug-in [9]. In the experiments, clients are emulated using a cluster of PCs and client requests are generated following a Poisson process (P(k) = (λ^k / k!) · e^(−λ)). A Zipf-like distribution is used to assign the popularity of videos. We assume that the video length is 90 minutes and that each client is able to save up to 5 minutes of video information. The performance of the Merging policy is determined by results from [5]. We should point out that no buffer constraint is considered in the Merging policy; in a real-case scenario, the Merging policy could only merge two streams separated by no more than the client buffer length, so its performance would not be as good. The key observations from Fig 4 are: 1) Using the Patching policy, the bandwidth requirement increases with more requests. This makes the Patching policy unsuitable for a high-demand video service.
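The workload described above can be reproduced with a short sketch (our own reconstruction of the setup; the parameter names `rate`, `skew` and the fixed seed are assumptions):

```python
# Sketch of the experimental workload: Poisson request arrivals (exponential
# interarrival times) and Zipf-like video popularity.
import random

def request_arrivals(rate, horizon, rng=random.Random(42)):
    """Poisson process: exponential interarrival times with the given rate
    (requests per minute), generated up to `horizon` minutes."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

def zipf_pick(num_videos, skew, rng=random.Random(42)):
    """Zipf-like choice: video i is requested with weight 1 / i**skew,
    so lower-numbered videos are the more popular ones."""
    weights = [1.0 / (i ** skew) for i in range(1, num_videos + 1)]
    return rng.choices(range(1, num_videos + 1), weights=weights)[0]
```

For instance, `request_arrivals(2.0, 90)` emulates roughly 180 requests over a 90-minute video, and `zipf_pick(100, 0.9)` selects a video from a 100-title catalog with skew factor 0.9.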

Server Bandwidth Requirement According to Video Popularity
2) Under PSM, clients can collaborate with the server to deliver patch streams. Regardless of the interarrival time, the server does not need any more than 18 streams to serve a video. This makes the PSM more suitable than a Patching policy for serving popular videos.
3) The main virtue of the DDCM(PSM+CSM) could be summarised as 'more requests, less server bandwidth'. As we can see in Fig 4, the server bandwidth requirement of the DDCM initially increases up to 12 streams; up to this point, there are not yet enough client resources to merge complete streams. As soon as a critical mass of client resources is collected, the CSM merges consecutive streams and the bandwidth requirement drastically drops to 1-2 streams per video. Compared with a Patching policy, the DDCM(PSM+CSM) achieves a resource reduction of 73% (4 vs. 15 streams) if there are 30 requests during 90 minutes (one request per 3 minutes). A reduction of 92.5% (2 vs. 27 streams) is achieved when there are 360 requests. Compared with the Merging policy, the DDCM(PSM+CSM) does not reduce the required resources until 30 requests. However, with 180-720 requests, the Merging policy gets closer to PSM (10-14 streams) and the DDCM(PSM+CSM) reduces the bandwidth consumption by up to 85.71% (2 vs. 14 streams).

Service Bandwidth Requirement for Multiple Videos
In order to measure the bandwidth requirement of a server offering more than one video, we suppose that the catalog contains 30 to 550 videos. We consider that the number of requests arriving during the video length (90 minutes) ranges from 90 (low client activity) to 4500 (high client activity). We also obtained the bandwidth requirement when a client is not able to collaborate with the server to merge channels that are not delivering the same video as the one the client is playing. As we can see in Fig 5 a) (DDCM(SameMovie)), the requirement is clearly higher than that of the DDCM without this restriction. These results justify our delivery policy design. Fig 5 b) shows the server bandwidth requirement according to the size of the catalog. We have supposed that 2700 requests arrive in 90 minutes. Regardless of the delivery policy, the bandwidth requirement increases with the number of videos. The DDCM shows a requirement reduction of between 85.23% (30 videos) and 27.77% (550 videos), and is able to reduce the number of required channels by 300-344.

Circumstantial Workload Variations
In this section we are interested in measuring the server's capacity to face circumstantial workload variations. Suppose the following situation: we are designing a VoD system for 3600 clients and, most of the time, only 50% of the clients are active. Taking equipment cost into consideration, the VoD server could be designed for a particular, acceptable blocking probability. Most of the time, the server is able to attend to all client requests (20 requests/minute). However, in special situations, such as the Olympic Games, all 3600 clients may decide to request videos at the same time (40 requests/minute). Furthermore, since most of the population is interested in such an event, the video popularity distribution could change, increasing the skew parameter of the Zipf-like distribution. Fig 6 shows the requirement variation when twice the number of client requests reach the server. With a Patching policy, the resource requirement increases by between 51.58% and 34.74%, depending on the variation of the skew parameter; as the skew factor increases, the resource requirement variation gets lower. With PSM, the resource requirement increases by 32.60% when the skew factor increases from 0.9 to 1. In this case, the PSM is 6.1% better than a Patching policy, which produces an increase of 34.74%. The DDCM policy produces a maximum increase of 16.67% if there is no variation in the popularity distribution. In the worst case (skew factor 0.9), the DDCM is 67.68% better than a Patching policy in terms of the increase in resource requirement.

Conclusions
We have proposed and evaluated a new video delivery technique called Dynamic Distributed Collaborative Merging that enables clients to efficiently collaborate with VoD servers. With the DDCM policy, every client is able to collaborate with the server, regardless of the video that they are watching. Instead of independent collaborations between the server and a single client, the DDCM synchronizes a group of clients to merge multicast channels, achieving better network efficiency.
Our experimental results show that the DDCM has lower resource requirements than the Patching policy, achieving reductions of up to 92.5%. When offering multiple videos with high client activity, the DDCM is able to reduce the resource requirement by up to 64.61%. These results corroborate the high scalability of the DDCM when the number of requests is high. The DDCM also makes investment in VoD server resources more cost-effective, since occasional peaks in client demand are absorbed by client contributions. Experimental results show that the DDCM is 67.68% better than the Patching policy in terms of the increase in resource requirement, suggesting that the DDCM is a more suitable delivery policy for VoD, in which the number of active clients changes over time.

Future Work
In this study, we have not considered the additional network load that clients will suffer due to our policies; more research will be needed on this point. However, we would like to point out that multicast schemes are usually effective in local networks. Fault tolerance is another pending question that should be carefully analyzed.