Environmental conditions, changing locations of mobile hosts and structural changes in a wireless network can easily affect and change the reliability of a system. In particular, the changing locations of mobile nodes are considered a major problem that degrades system reliability. In this study, we address the link reliability problem and aim to optimize the arrival percentage of transmitted data at the receiver's end. We also develop a probabilistic model that takes into account the fading nature of wireless networks. The model establishes a relationship between the link reliability, the distance between communicating nodes and the transmitted power. By applying this probabilistic model to a multi-hop network, an end-to-end route reliability model is derived and analyzed. The optimum reliability model is then developed and simulated. The simulation provides an improved end-to-end route reliability, taking into consideration the wireless broadcast property and fade-state independence between different pairs of nodes. Simulation results show that by adjusting the transmission power, transmission throughput is optimized, thereby increasing the end-to-end reliability.
INTRODUCTION
As mobile technology develops, wireless networks are applied in various applications, providing mobile users with ubiquitous and frequent access to computing resources. Wireless networks are more prone to failures and loss of access due to weak transmission power, terrain, interference, etc. Because of these impediments, there has been a paradigm shift from network reliability analysis for wired networks (Aggarwal et al., 1975; Ke and Wang, 1997; Lee and Park, 2001; Shaio, 2002) to wireless networks (Marks et al., 2001). However, reliability issues for wireless networks are quite different from those of wired networks, as wireless networks introduce a unique feature called terminal mobility, in which the types and numbers of components engaged in end-to-end communication are stochastic (Chen and Lyu, 2005).
Reliability is measured as the percentage of packets that arrive undamaged on a link. It is reported by the network interface hardware or firmware and is calculated as a moving average. In the Interior Gateway Routing Protocol (IGRP), the link reliability of a route equals the minimum link reliability along the path. Through experience and extensive empirical studies, effective estimators have been developed and tuned to the characteristics of particular link technologies and usage models (Woo and Culler, 2003). On a wired link, incoming packets are always detected and damaged packets are simply lost. In the wireless scenario, the channel is a broadcast medium, packets can be damaged or missed entirely by the receiver, and the link error dynamics are expected to be very different at low power.
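To make the moving-average notion concrete, the Python sketch below maintains a per-link delivery-ratio estimate using an exponentially weighted moving average over observation windows. It is only an illustrative estimator: the class name, smoothing factor and window sizes are our assumptions, not the estimator of any particular router firmware or of Woo and Culler (2003).

```python
class LinkReliabilityEstimator:
    """Illustrative EWMA estimator of a link's packet delivery ratio."""

    def __init__(self, alpha=0.6):
        self.alpha = alpha      # smoothing factor (assumed value, not from IGRP)
        self.estimate = 1.0     # start optimistic: link assumed fully reliable

    def update(self, received, expected):
        """Fold one observation window into the moving average.

        received -- packets that arrived undamaged in this window
        expected -- packets that should have arrived (e.g., from sequence numbers)
        """
        if expected == 0:
            return self.estimate
        window_ratio = received / expected
        self.estimate = self.alpha * self.estimate + (1 - self.alpha) * window_ratio
        return self.estimate


# Example: a link delivering 9/10, then 6/10, then 10/10 packets per window.
est = LinkReliabilityEstimator()
for rx, tx in [(9, 10), (6, 10), (10, 10)]:
    print(f"window {rx}/{tx} -> reliability estimate {est.update(rx, tx):.3f}")
```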
Motivated by models for the propagation of electromagnetic signals in space, the amount of energy required to establish a link between two nodes is usually assumed to be proportional to the distance between the communicating nodes raised to a constant power. This fixed exponent, referred to as the propagation-loss exponent, is usually assumed to be between 2 and 4. In this model, it is assumed that the information is received by the intended destination with certainty if the source transmits at a minimum power level dictated by its distance to the intended destination. This model is referred to as the deterministic link model (Khandani et al., 2008).
The basic tool for achieving reliability in any type of network is packet retransmission. A packet is re-sent to the destination if it is not acknowledged or if a retransmission is explicitly requested. Retransmission policies differ in answering the question of where to retransmit from. The end-to-end approach used in TCP advocates retransmitting from the two ends of the network. Although this is well suited for wired links with bit error rates on the order of 10^-15, it might not be a good option for very unreliable wireless links, which are the norm in wireless sensor networks. Having to retransmit a packet through the full path, even when only one node en route to the receiver failed to forward it, is a potential waste of energy. Moreover, studies have shown that the probability of successful packet delivery falls dramatically if an end-to-end retransmission mechanism is used (Clark, 1988; Jacobson, 1988).
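The trade-off described above can be illustrated with a small back-of-the-envelope calculation. The sketch below (ours, not taken from the cited works) compares the delivery probability without any retransmission and rough expected transmission counts for hop-by-hop versus end-to-end retransmission, assuming independent links with a common per-link delivery probability.

```python
def end_to_end_success(p_link: float, hops: int) -> float:
    """Probability a packet crosses all hops with no retransmissions anywhere."""
    return p_link ** hops

def expected_tx_hop_by_hop(p_link: float, hops: int) -> float:
    """Expected transmissions when each hop retransmits until its own success
    (geometric with mean 1/p per hop)."""
    return hops / p_link

def expected_tx_end_to_end(p_link: float, hops: int) -> float:
    """Rough upper bound on expected transmissions when only the source
    retransmits: 1/p^h full-path attempts, each charged h transmissions.
    (A failed attempt actually transmits fewer than h packets, so this
    overestimates; it still shows the blow-up with path length.)"""
    return hops / (p_link ** hops)

p = 0.8  # assumed per-link delivery probability for a lossy low-power link
for hops in (2, 5, 10):
    print(f"h={hops:2d}  Pr[e2e success, no ARQ]={end_to_end_success(p, hops):.3f}  "
          f"E[tx] hop-by-hop={expected_tx_hop_by_hop(p, hops):.1f}  "
          f"E[tx] end-to-end (bound)={expected_tx_end_to_end(p, hops):.1f}")
```

Under these simplified assumptions, the hop-by-hop cost grows only linearly with the number of hops, while the end-to-end cost grows much faster, which is consistent with the observation quoted above.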
An alternative model for the wireless link is based on the probability formulation (Biglieri et al., 1998; Ozarow and Shamai, 1994). In this model, the instantaneous capacity of a wireless link is treated as a random variable. A link is said to be in outage when the instantaneous capacity supported by the link is less than the transmission rate. The reliability of a link, i.e., the probability of correct reception at the receiver is modeled as a function of certain communication parameters. This model is referred to as the probabilistic link model (Khandani et al., 2008).
This study focuses on the reliability of connections in low power wireless networks. To achieve reliability, we model a multi-hop wireless network and concentrate on the optimization of certain parameters that influence the quality of received packet signals.
Related studies: Analyzing the reliability of computer networks involves some major difficulties, especially when considering wireless networks.
Network reliability can be assessed using either combinatorial or Markov modeling. Combinatorial modeling of networks requires the decomposition of a network into subnets and determining the reliability of the entire system as a combination of the subnets (Cheng and Ibe, 1992; Menezes and Bakhru, 1995). This method has been used to evaluate the reliability of shuffle-exchange networks. Markov modeling can also be used as an alternative to combinatorial modeling or in conjunction with it. Blake and Trivedi (1989) used continuous-time Markov chains to determine the reliability of shuffle-exchange networks; they combine Markov modeling with combinatorial modeling by dividing the studied network system into a two-level model, obtaining the reliability of each subsystem using Markov modeling and the system reliability using a series system composed of the Markov components.
In Balakrishnan and Reibman (1994), Markov modeling is used to determine the reliability of private networks where the minimal operational path is dictated by the application. The Balakrishnan and Reibman (1994) model presents an instance where combinatorial analysis is no longer feasible, since the reliability models depend upon the communication paths. Because the communication paths can take any form, they cannot be accurately represented as series-parallel combinations.
Network reliability analysis using Petri-nets has not been carried out extensively. Most reliability models based on Petri-nets deal with small redundant systems with a fixed number of components. The reason for this limited usage of Petri-nets is that only systems that employ replicable building blocks can be easily modeled (Benitez and Fortez, 1992); the use of Petri-net models for determining the reliability of fault-tolerant processor arrays has also been demonstrated.
Wireless network systems exhibit unique handoff characteristics, which lead to different communication structures with various components and links. Therefore, the traditional definition of two-terminal reliability is no longer applicable and a new approach is needed to define reliability metrics in wireless networks. A new term, end-to-end expected instantaneous reliability, has been proposed to integrate the different communication structures into one metric, which includes not only failure parameters but also service parameters (Khandani et al., 2008).
Recently, Yasar (2007) has proposed algorithms for reliability analysis in mobile communication networks. In his study, he observed that the environmental conditions, changing locations of mobile hosts and changing structure of the wireless computer network can easily affect and change the system's reliability. He extended two-terminal reliability to wireless computer networks using Markov chains as a predictor for the changing locations of the mobile node(s). His algorithms consider four possible access point conditions, namely static to static, mobile to static, static to mobile and mobile to mobile.
Problem statement: As observed in the introduction, there has been a shift of attention from analyzing reliability for wired networks to wireless networks. This is due in part to the chaotic and unpredictable connectivity at low power in multi-hop sensor networks. Traditionally, this problem is addressed by determining the degree of error-coding redundancy or the expected number of retransmissions on a link. In sensor networks, it arises as part of network self-organization. The network comprises a large number of resource-constrained nodes communicating via low-power radios, such that each transmission from a node is potentially heard by a small subset of the overall network (Woo and Culler, 2003).
The target of this study is to optimize the arrival percentage of transmitted data at the receiver's end. This is achieved by maximizing the transmission throughput and minimizing the total delay. To achieve this aim, we model and simulate the transmission link as a function of transmission parameters such as the transmission rate, transmitted power, distance between the communicating nodes and the channel fade state, in order to maintain the link reliability of multi-hop wireless networks. This approach helps to evaluate the relationships that exist between link connectivity parameters and how they can be optimized to improve reception reliability.
Network reliability modeling: Network reliability refers to the ability of the overall network to provide communication in the event of a failure of a component in the network, and it depends on the sustainability of both hardware and software. Traditionally, failures were primarily due to hardware malfunctions and thus the emphasis was on element-level network availability. In current networks, most failures are due to fibre cable cuts, software causes and malicious attacks (Medhi and Tipper, 2000). Such failures can drop a significant number of existing connections. Thus, the network should have the ability, with low latency, to detect and isolate a fault and reconnect the affected connections. In some cases, depending on the nature of the failure, the network may not have enough capacity to handle a major simultaneous reconnect phase (Medhi, 2008). In the following analysis, we address network reliability with respect to connectivity and performability.
Network connectivity: Network connectivity is the availability of a path from a source node to a destination node. The assumption of dual homing and a connectivity (c) of at least two for all nodes implies that there is no cutset of size 1 (i.e., of size c - 1), or no single point of failure. A cutset is defined as a set of nodes S ⊆ N whose failure leaves the network in a failed state, that is, the network is not all-terminal connected in this state. For such a network, a lower bound on reliability may be computed following Ball et al. (1995). Let us assume that every set of elements of size c is a cutset. For an m-element system, the system reliability, R(P), is then equivalent to the reliability of a k-of-n system with k = m - c + 1 and n = m and is given by Eq. 1:
$$R(P) = \sum_{i=m-c+1}^{m} \binom{m}{i} P^{i} (1-P)^{m-i} \qquad (1)$$

where:
P = reliability of a network node
c = connectivity of a network node
Alternatively, if P is close to unity, the all-terminal reliability can be approximated by:
$$R(P) \approx 1 - C_{c} (1-P)^{c} \qquad (2)$$

where:
Cc = the number of cutsets of size c in the network
For instance, using Eq. 1, if c = 2, p = 0.9999 and m = 8, then R(P) is 0.9999997, that is, the probability of all the end points in the system being connected is almost unity. It should be noted that this is a lower bound and assumes that every pair of nodes constitutes a cutset resulting in loss of connectivity, which is obviously pessimistic since only a subset of node pairs are cutsets. Using Eq. 2, if c = 2, p = 0.9999, m = 8 and Cc = 4, then R(P) is approximately 1 (0.99999996), i.e., the probability of all the end points in the system being connected is essentially unity.
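As a numerical check on the example above, the short Python sketch below evaluates Eq. 1 and 2 in the forms reconstructed here and reproduces the quoted figures; it is a verification aid, not part of the original study.

```python
from math import comb

def k_of_n_lower_bound(p: float, m: int, c: int) -> float:
    """Eq. 1: lower bound on all-terminal reliability, treating every set of
    c elements as a cutset, i.e. a k-of-n system with k = m - c + 1, n = m."""
    k = m - c + 1
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(k, m + 1))

def cutset_approximation(p: float, c: int, n_cutsets: int) -> float:
    """Eq. 2: approximation for p close to 1 using the count of size-c cutsets."""
    return 1.0 - n_cutsets * (1 - p) ** c

print(k_of_n_lower_bound(0.9999, m=8, c=2))            # ~0.9999997, as in the text
print(cutset_approximation(0.9999, c=2, n_cutsets=4))  # ~0.99999996, i.e. ~1
```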
Network performability: Network performability is the ability of the network, in the presence of failures, to preserve existing connections (no dropped calls) and to establish new connections (no blocked calls) (Ball et al., 1995). Other aspects of performability include the loss of capacity in the presence of node failures and performance characteristics, such as delay, in the presence of node failures.
In telecommunication networks, lost calls are measured in terms of calls per million calls. Usually, this metric is calculated over a 1 year interval and is expressed in Defect Rate per Million (DRM). Calls may be lost due to the following failure events in the network: total switch failure and partial switch failure.
The loss of existing connections due to total switch failures in DRM, Lt, is given by:
$$L_{t} = \frac{\lambda_{t}\, C_{s}}{C_{y}} \times 10^{6} \qquad (3)$$
Loss of existing connections due to partial switch failures in DRM, Lp, is given by:
$$L_{p} = \lambda_{p}\,(1 - C)\, C_{s} \times 10^{6} \qquad (4)$$
where C = 0 corresponds to the case of no coverage (for software faults).
Therefore, the total lost calls in DRM, L, is given by Eq. 5:
$$L = L_{t} + L_{p} \qquad (5)$$
where:
λt = total failure events per year
λp = software failures per connection setup for a core switch
C = coverage factor for recovery from software failures
Cs = maximum simultaneous connections per switch
Cy = calls handled per switch per year
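Since the original equation images are not reproduced here, the sketch below implements Eq. 3-5 in the forms reconstructed above (dropped calls per failure event normalised by yearly call volume and scaled to defects per million); both the expressions and the parameter values are our illustrative assumptions rather than the authors' exact figures.

```python
def lost_calls_drm(lam_t, lam_p, C, Cs, Cy):
    """Lost calls in defects per million (DRM), using the forms of Eq. 3-5
    as reconstructed above (an assumption, not necessarily the authors'
    exact expressions).

    lam_t -- total switch failure events per year
    lam_p -- software failures per connection setup
    C     -- coverage factor for recovery from software failures
    Cs    -- maximum simultaneous connections per switch
    Cy    -- calls handled per switch per year
    """
    l_t = 1e6 * lam_t * Cs / Cy       # Eq. 3: total switch failures
    l_p = 1e6 * lam_p * (1 - C) * Cs  # Eq. 4: uncovered partial (software) failures
    return l_t, l_p, l_t + l_p        # Eq. 5: total lost calls

# Purely illustrative numbers: 0.05 total failures/year, 1e-7 software failures
# per setup, 90% coverage, 100,000 simultaneous connections, 50 million calls/year.
l_t, l_p, l = lost_calls_drm(0.05, 1e-7, 0.9, 100_000, 50_000_000)
print(f"L_t = {l_t:.1f} DRM, L_p = {l_p:.1f} DRM, L = {l:.1f} DRM")
# -> L_t = 100.0 DRM, L_p = 1000.0 DRM, L = 1100.0 DRM for these assumed values
```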
MATERIALS AND METHODS
Multihop wireless network: This is a cooperative network, where data streams may be transmitted over multiple wireless hops to reach the destination. The network link structure depends on the transmission radius of the nodes and can be adjusted by varying the transmission power (Zafer and Modiano, 2006). A smaller transmission radius of the nodes causes less interference at each hop but the calls have to hop through many nodes to reach the destination. As the same call is served by many nodes along the route, multi-hopping increases the internal load in the network. In contrast, a larger transmission radius reduces the number of hops of a call but increases the interference constraints at each hop.
In a line network, a larger transmission radius reduces the blocking probability of calls, whereas in a grid network with an underlying denser node topology, it is more desirable to use a smaller transmission radius. This suggests that for sparse networks, the increase in the internal load due to multi-hopping contributes significantly to call blocking, whereas for denser networks, the increase in the number of interfering neighboring nodes due to a larger radius is a significant limiting factor (Gupta and Kumar, 2000; Grossglauser and Tse, 2002).
Probabilistic metrics: Probabilistic reliability metrics require the concept of a probabilistic graph. A probabilistic graph is an undirected graph, where each node has an associated probability of being in an operational state and likewise for each edge. In probabilistic reliability analysis, networks under stress are modeled as probabilistic graphs (Weichenberg, 2003). Almost all approaches to probabilistic reliability analysis have focused on the probability that a subset of nodes in a network is connected when links are very reliable. Thus, all-terminal reliability of a probabilistic graph can be defined as the probability that any two nodes in the graph have an operating path connecting them.
If links fail in a statistically independent fashion with probability P, then the all-terminal probability Pc (G, P) is given by Eq. 6:
$$P_{c}(G, P) = \sum_{i=0}^{m} A_{i}\,(1-P)^{i}\,P^{\,m-i} \;=\; 1 - \sum_{i=0}^{m} C_{i}\,P^{i}\,(1-P)^{m-i} \qquad (6)$$

where:
m = the total number of links in the graph
Ai = the number of connected subgraphs with i edges
Ci = the number of edge cutsets of cardinality i
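For a very small graph, Eq. 6 can be evaluated directly by enumerating every subset of operating edges and summing the probability of the connected ones. The brute-force Python sketch below does exactly that for a hypothetical 4-node ring; it is meant only to make the A_i form of Eq. 6 concrete and scales exponentially with the number of links.

```python
from itertools import combinations

def connected(nodes, edges):
    """Union-find check that the given edges connect all nodes."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in nodes}) == 1

def all_terminal_reliability(nodes, edges, p_fail):
    """Eq. 6 by exhaustive enumeration: sum over operating edge subsets of
    (1 - p_fail)^(#up) * p_fail^(#down), counting only connected subgraphs."""
    m = len(edges)
    total = 0.0
    for k in range(m + 1):
        for up in combinations(edges, k):
            if connected(nodes, up):
                total += (1 - p_fail) ** k * p_fail ** (m - k)
    return total

# 4-node ring: any single link failure still leaves the network connected.
nodes = [0, 1, 2, 3]
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(all_terminal_reliability(nodes, ring, p_fail=0.01))
# = (1-p)^4 + 4*p*(1-p)^3 ~ 0.9994 for p_fail = 0.01
```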
Probabilistic link model: The received signal is modeled as Eq. 7:
Y = ax + η    (7)

where:
x = the transmitted signal
η = the additive noise received
a = the signal attenuation due to propagation in the wireless point-to-point link
Y = the received signal
We assume that the received noise η is zero-mean additive white Gaussian noise with average power σn². In general, the attenuation a depends on the distance between the communicating nodes and the fade state of the channel. Let d represent the distance between the communicating nodes and f the fading state of the channel; a can therefore be expressed as a function of these two parameters:
$$a = a(f, d) \qquad (8)$$
In systems with mobile nodes and a constantly changing propagation environment, both f and d vary over time. We assume that f and d remain constant for a period that is long compared to a typical transmission block length and that the transmission blocks are long enough for coding to average out the Gaussian noise. Given these assumptions, the link between two nodes is a single Additive White Gaussian Noise (AWGN) channel and the amount of information (capacity) that can be reliably transmitted through this channel (Rappaport, 2003) is given by Eq. 9:
$$C = \log_{2}\!\left(1 + \frac{a^{2}(f, d)\,P}{\sigma_{n}^{2}}\right) \qquad (9)$$

where P is the transmitted power.
Simplifying this notation by decomposing a (f, d) into two independent components corresponding to small scale fading and large scale path-loss (Proakis, 2001), we have:
$$a^{2}(f, d) = \frac{f^{2}}{d^{\,k}} \qquad (10)$$
where k is the propagation power-loss exponent, usually assumed to be between 2 and 4. Substituting into Eq. 9 and setting:
$$\mathrm{SNR} = \frac{P}{\sigma_{n}^{2}}$$
we obtain:
$$C = \log_{2}\!\left(1 + \frac{f^{2}\,\mathrm{SNR}}{d^{\,k}}\right) \qquad (11)$$
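Under these assumptions, the instantaneous link capacity of Eq. 11 is a one-line computation. The sketch below (with purely illustrative values for the SNR, distance, path-loss exponent and attempted rate) evaluates it for a few fade states and flags an outage whenever the capacity falls below the attempted rate, matching the outage definition given in the introduction.

```python
import math

def link_capacity(snr, fade, distance, k=3.0):
    """Eq. 11: C = log2(1 + f^2 * d^(-k) * SNR), with SNR = P / sigma_n^2."""
    return math.log2(1 + fade**2 * distance**(-k) * snr)

# Illustrative numbers: transmit SNR of 30 dB, 2 distance units,
# path-loss exponent k = 3 (anywhere between 2 and 4 per the text).
snr = 10 ** (30 / 10)
rate = 2.0  # attempted transmission rate in bits/s/Hz (assumed)
for fade in (1.0, 0.5, 0.1):
    c = link_capacity(snr, fade, distance=2.0)
    print(f"fade={fade:.1f}  capacity={c:.2f} b/s/Hz  outage={c < rate}")
```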
Reliability at the network layer: A multi-hop route is a sequence of nodes through which the information is relayed from a source node, s, to a destination node, d, i.e.:
Route = (r0, r1, …, rh-1, rh)

where:
r0 = s (the source node)
rh = d (the destination node)
h = the number of hops
We assume the network operates based on a time-division protocol under which successive transmissions along a route happen in consecutive slots. The end-to-end reliability is defined as the probability of successful end-to-end transmission. We also assume that the fading factors for different links are iid Rayleigh random variables. Based on these assumptions, the reliability of a Rayleigh-fading link with fixed distance (Khandani et al., 2008) is expressed as Eq. 12:
$$R_{\mathrm{link}} = \Pr\{C \geq r\} = \exp\!\left(-\,\frac{(2^{r} - 1)\,d^{\,k}}{\mathrm{SNR}}\right) \qquad (12)$$

where r denotes the transmission rate.
We then obtain the end-to-end reliability as Eq. 13:
$$R_{\mathrm{e2e}} = \prod_{i=1}^{h} \exp\!\left(-\,\frac{(2^{r} - 1)\,d_{i}^{\,k}}{\mathrm{SNR}_{i}}\right) = \exp\!\left(-(2^{r} - 1)\sum_{i=1}^{h} \frac{d_{i}^{\,k}}{\mathrm{SNR}_{i}}\right) \qquad (13)$$
Following the derivations of Khandani et al. (2008), the resulting end-to-end reliability for an optimal power allocation is obtained as Eq. 14:
$$R_{\mathrm{optimal}} = \exp\!\left(-\,\frac{(2^{r} - 1)\left(\sum_{i=1}^{h} d_{i}^{\,k/2}\right)^{2}}{\mathrm{SNR}_{\text{Total-Max}}}\right) \qquad (14)$$
where SNRTotal-Max is the fixed end-to-end power (SNR) budget shared among the hops.
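The effect of the optimal allocation can be illustrated numerically. The sketch below compares a uniform split of the end-to-end SNR budget against an allocation with SNR_i proportional to d_i^(k/2), which is the allocation that yields Eq. 14 in the reconstruction used here; the route geometry, rate and budget are illustrative assumptions.

```python
import math

def route_reliability(snrs, distances, rate, k=3.0):
    """Eq. 13 (as reconstructed): product of per-hop Rayleigh link reliabilities."""
    return math.prod(math.exp(-(2**rate - 1) * d**k / s)
                     for s, d in zip(snrs, distances))

def optimal_allocation(snr_total, distances, k=3.0):
    """Allocation achieving Eq. 14: SNR_i proportional to d_i^(k/2)."""
    weights = [d ** (k / 2) for d in distances]
    return [snr_total * w / sum(weights) for w in weights]

distances = [1.0, 2.0, 4.0]       # illustrative 3-hop route
snr_total = 10 ** (35 / 10)       # assumed fixed end-to-end SNR budget
rate = 2.0

uniform = [snr_total / len(distances)] * len(distances)
optimal = optimal_allocation(snr_total, distances)

print("uniform allocation :", route_reliability(uniform, distances, rate))  # ~0.81
print("optimal allocation :", route_reliability(optimal, distances, rate))  # ~0.88
```

Distributing more of the budget to the longer hops compensates for their faster path loss, which is why the proportional allocation outperforms the uniform split in this example.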
RESULTS AND DISCUSSION
For this research, the probabilistic link model for the end-to-end reliability and the optimum reliability is simulated. Results of the simulation are written to text files for rapid compilation. The simulation is carried out over a broad spectrum of varying parameters. The results obtained are analyzed and presented below. The model equations simulated are Eq. 13 and 14. For Eq. 13, we simulate the end-to-end reliability and establish a relationship between the end-to-end reliability and distance as well as the SNR. We then substitute
$$\mathrm{SNR} = \frac{x}{\sigma_{n}^{2}}$$
into Eq. 13 and relate the end-to-end reliability to x, the transmit power. For Eq. 14, we simulate the optimal end-to-end link reliability, Roptimal, and relate this parameter to distance. The simulation program is developed in Visual Basic, an object-oriented programming language suitable for scientific simulations. The program is dynamic and is generically programmed for adaptability.
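The original Visual Basic simulator is not reproduced in the paper; as a rough stand-in, the Python sketch below mirrors the described workflow under the reconstructed model equations, sweeping the hop distance and writing the resulting end-to-end reliability to a text file. The file name, sweep range and parameter values are our assumptions.

```python
import math

def end_to_end_reliability(snr, distances, rate=2.0, k=3.0):
    """Eq. 13 with equal per-hop SNR (our reconstruction of the model)."""
    return math.prod(math.exp(-(2**rate - 1) * d**k / snr) for d in distances)

# Sweep the per-hop distance and log results to a text file, mirroring the
# described simulation output; file name and ranges are illustrative.
with open("reliability_vs_distance.txt", "w") as out:
    out.write("distance_per_hop\treliability\n")
    for step in range(1, 41):
        d = step * 0.1                   # 0.1 .. 4.0 distance units
        r_e2e = end_to_end_reliability(snr=10 ** (30 / 10), distances=[d] * 3)
        out.write(f"{d:.1f}\t{r_e2e:.6f}\n")
```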
Since the reliability of a link, i.e., the probability of correct reception of the signal at the receiver's end, can be modeled as a function of the transmit power, the distance between communicating nodes and the Signal-to-Noise Ratio (SNR), a probabilistic model has been designed. This model establishes a relationship between the link reliability, the transmit power, the distance between communicating nodes and the signal-to-noise ratio. We simulate the model and show the results in the form of graphs. The graphs are analyzed and interpreted below. The interpretations help to predict the actual relationship between the link reliability and the multi-hop propagation parameters.
Shown in Fig. 1 is a plot of the end-to-end reliability as a function of distance.
As expected, the end-to-end reliability decreases monotonically with distance due to fading. This indicates poor signal reception at the receiver's end. The result also shows that, due to attenuation, mobile nodes cannot maintain reliable reception over increasing distances with a fixed transmit power. Hence, the need to improve the system performance is obvious. An exponential trend equation fitted to the graph can be used to predict new empirical values.
Figure 2 shows the end-to-end reliability performance with respect to the mobile transmit power. It is observed that reliability increases monotonically as the transmit power per link increases. This implies that, for any fixed route, different power allocation schemes result in different end-to-end reliability and consumed power.
Fig. 1: A plot of end-to-end reliability vs. distance
Fig. 2: A plot of end-to-end reliability vs. transmit power
This graph follows a logarithmic curve, and a logarithmic trend equation is fitted to the graph to enable the reader to predict new empirical values.
A plot of reliability versus SNR is shown in Fig. 3. It is observed from this figure that signal reception at the receiver's end of the mobile link improves as the SNR increases, since the higher transmission power offsets the attenuation.
Fig. 3: A plot of end-to-end reliability vs. SNR
Figure 1 shows the performance of the system evaluated as a function of reliability and distance when the transmission power of the mobile link is appropriately adjusted to guarantee quality of service between the mobile links. It can be seen clearly from the graph that the end-to-end reliability increases to a maximum at approximately 2 km before decreasing. This type of performance gives the insight that there is always an optimal power allocation, subject to the communication range, that maximizes and guarantees end-to-end reliability, beyond which the system performance degrades.
CONCLUSION
The problems associated with link reliability in multi-hop wireless networks were studied. Using a probability distribution, an end-to-end link reliability model was derived for wireless channels. Also, algorithms were developed for finding the optimum reliability between a source-destination pair of nodes under reliability-power constraints and distance. Using simulation, we compared the system performance (link reliability) with respect to distance, transmit power and signal-to-noise ratio where the idea of adjusting transmission power was introduced as a way to improve the end-to-end reliability by guaranteeing Quality of Service (QoS).
The models proposed in this study could serve as a catalyst for a new area of research in link reliability. Also, consistent and approximate models, which strike a good balance between simplicity and applicability, need to be developed and tested. These models should possess intuitive inputs, which are readily available to network designers.
From the results obtained, it is evident that an increase in the distance between communicating nodes adversely affects signal reception and that the end-to-end reliability increases with the transmit power. The idea of adjusting the transmission power has helped immensely in maximizing the transmission throughput, thereby increasing the end-to-end reliability.
Moses E. Ekpenyong and Joseph Isabona. Probabilistic Link Reliability Model for Wireless Communication Networks.
DOI: https://doi.org/10.36478/ijssceapp.2009.22.29
URL: https://www.makhillpublications.co/view-article/1997-5422/ijssceapp.2009.22.29