8 results for allocation of fixed cost with normal capacity
in Digital Commons at Florida International University
Abstract:
In recent years, wireless communication infrastructures have been widely deployed for both personal and business applications. The IEEE 802.11 series of Wireless Local Area Network (WLAN) standards attracts a great deal of attention due to its low cost and high data rates. Wireless ad hoc networks that use IEEE 802.11 standards are one of the hot spots of recent network research, and designing appropriate Media Access Control (MAC) layer protocols is one of the key issues for such networks.

Existing wireless applications typically use omni-directional antennas, whose gain is the same in all directions. Due to the nature of the Distributed Coordination Function (DCF) mechanism of the IEEE 802.11 standards, only one of the one-hop neighbors can send data at a time; nodes other than the sender and the receiver must be in an idle or listening state, otherwise collisions can occur. The downside of omni-directionality is that the spatial reuse ratio is low and the capacity of the network is considerably limited.

Directional antennas have therefore been introduced to improve spatial reuse. A directional antenna has the following benefits: it can improve transport capacity by confining interference to a directional main lobe; it can increase coverage range due to a higher Signal to Interference plus Noise Ratio (SINR), i.e., with the same power consumption, better connectivity can be achieved; and it can reduce power usage, i.e., for the same coverage, a transmitter can lower its transmit power.

To exploit the advantages of directional antennas, we propose a relay-enabled MAC protocol. Two relay nodes are chosen to forward data when the channel condition of the direct link from the sender to the receiver is poor. The two relay nodes can transfer data at the same time, and pipelined data transmission can be achieved by using directional antennas. Throughput improves significantly when the relay-enabled MAC protocol is introduced.

Beyond these strong points, directional antennas also have some explicit drawbacks, such as the hidden-terminal and deafness problems and the requirement of maintaining location information for each node. Therefore, an omni-directional antenna should be used in some situations. The combined use of omni-directional and directional antennas leads to the problem of configuring heterogeneous antennas, i.e., given a network topology and a traffic pattern, we need to find a tradeoff between using omni-directional and directional antennas to obtain better network performance over this configuration.

Directly and mathematically establishing the relationship between network performance and antenna configuration is extremely difficult, if not intractable. Therefore, in this research we propose several clustering-based methods to obtain approximate solutions to the heterogeneous antenna configuration problem that improve network performance significantly.

Our proposed methods consist of two steps. The first step (clustering links) clusters the links into groups based on a matrix-based system model; after clustering, the links in the same group have similar neighborhood nodes and will use the same type of antenna. The second step (labeling links) decides the type of antenna for each group: some groups of links will use directional antennas and others will adopt omni-directional antennas. Experiments comparing the proposed methods with existing methods demonstrate that our clustering-based methods improve network performance significantly.
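The two-step method above can be sketched in miniature. The following Python toy (the function names, the binary neighborhood matrix encoding, and the density heuristic are illustrative assumptions, not the dissertation's matrix-based model) clusters links by the similarity of their neighborhood rows and then labels each group with an antenna type:

```python
import numpy as np

def cluster_links(neigh, k, iters=50):
    """Step 1 (clustering links): group links whose rows of the
    neighborhood matrix are similar, using plain k-means.
    neigh[i, j] = 1 if node j neighbors link i (toy encoding)."""
    neigh = np.asarray(neigh, dtype=float)
    # Deterministic init: spread the starting centers across the rows.
    centers = neigh[np.linspace(0, len(neigh) - 1, k).astype(int)].copy()
    labels = np.zeros(len(neigh), dtype=int)
    for _ in range(iters):
        dists = ((neigh[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            members = neigh[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels

def label_antennas(neigh, labels):
    """Step 2 (labeling links): assign an antenna type per group.
    Heuristic stand-in: groups with denser-than-average neighborhoods
    get directional antennas (to cut interference), the rest omni."""
    neigh = np.asarray(neigh, dtype=float)
    avg = neigh.sum(axis=1).mean()
    return {int(c): ("directional"
                     if neigh[labels == c].sum(axis=1).mean() > avg
                     else "omni")
            for c in np.unique(labels)}
```

In this sketch, links that share neighborhood nodes end up in the same group, mirroring the property the abstract attributes to the first step; the labeling rule is only a placeholder for the second step's actual decision procedure.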
Abstract:
Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated in parallel lines. An example of this manufacturing system configuration is observed at a manufacturing facility equipped to assemble and test web servers; characteristics of a typical web server assembly line are multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial and various parallel-processing configurations with multiple product classes and job circulation due to random part failures. In addition, correction terms obtained via regression analysis were added to the approximations in order to minimize the error between the analytical approximations and the simulation models. Markovian and general-type manufacturing systems were studied, with multiple product classes, job circulation due to failures, and fork-join systems to model parallel processing. In both the Markovian and the general case, the approximations without correction terms performed quite well for one- and two-product problem instances. However, the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to handle more than two products. Numerical comparisons showed that the approximations perform remarkably well when the correction factors are used: on average, the flow time error was reduced from 38.19% to 5.59% in the Markovian case and from 26.39% to 7.23% in the general case.
All the equations in the analytical formulations were implemented as a set of Matlab scripts. Using these scripts, operations managers of web server assembly lines, or of manufacturing and service systems with similar characteristics, can estimate various system performance measures and make judicious decisions, especially when setting delivery due dates, planning capacity, and mitigating bottlenecks.
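The abstract's implementation is in Matlab; purely as a rough illustration of the approximation-plus-regression-correction idea (using a hypothetical single M/M/1 station, not the dissertation's multi-class fork-join formulations), one can fit a polynomial correction in traffic intensity to the gap between an analytical flow time and simulated observations:

```python
import numpy as np

def mm1_flow_time(lam, mu):
    """Uncorrected analytical mean flow time of an M/M/1 station,
    W = 1/(mu - lam). A stand-in for the network approximations."""
    if lam >= mu:
        raise ValueError("stability requires lam < mu")
    return 1.0 / (mu - lam)

def fit_correction(rho_obs, gap_obs, degree=2):
    """Regression step: fit a polynomial in traffic intensity rho to
    the observed gap (simulated minus analytical flow time)."""
    return np.polyfit(rho_obs, gap_obs, degree)

def corrected_flow_time(lam, mu, coeffs):
    """Analytical approximation plus the fitted correction term."""
    return mm1_flow_time(lam, mu) + np.polyval(coeffs, lam / mu)
```

In practice the `gap_obs` values would come from simulation runs at several traffic intensities, matching the abstract's observation that the uncorrected error grows with net traffic intensity.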
Abstract:
For children with intractable seizures, surgical removal of epileptic foci, if identifiable and feasible, can be an effective way to reduce or eliminate seizures. The success of this type of surgery strongly hinges upon the ability to identify and demarcate those epileptic foci. The ultimate goal of this research project is to develop an effective technology for detection of unique in vivo pathophysiological characteristics of epileptic cortex and, subsequently, to use this technology to guide epilepsy surgery intraoperatively. In this PhD dissertation, the feasibility of using optical spectroscopy to identify unique in vivo pathophysiological characteristics of epileptic cortex was evaluated and proven using data collected from children undergoing epilepsy surgery.

In this first in vivo human study, static diffuse reflectance and fluorescence spectra were measured from the epileptic cortex, defined by intraoperative electrocorticography (ECoG), and its surrounding tissue in pediatric patients undergoing epilepsy surgery. When feasible, biopsy samples were taken from the investigated sites for subsequent histological analysis. Using the histological data as the gold standard, the spectral data were analyzed with statistical tools. The results show that static diffuse reflectance spectroscopy, alone and in combination with static fluorescence spectroscopy, can effectively differentiate in vivo between epileptic cortex with histopathological abnormalities and normal cortex, with a high degree of accuracy.

To maximize the efficiency of optical spectroscopy in detecting and localizing epileptic cortex intraoperatively, the static system was upgraded to investigate histopathological abnormalities deep within the epileptic cortex, as well as to detect unique temporal pathophysiological characteristics of epileptic cortex. Detection of deep abnormalities within the epileptic cortex prompted a redesign of the fiberoptic probe. A mechanical probe holder was also designed and constructed to maintain the probe's contact pressure and contact point during the time-dependent measurements. The dynamic diffuse reflectance spectroscopy system was used to characterize pediatric epileptic cortex in vivo. The results show that some unique wavelength-dependent temporal characteristics (e.g., multiple horizontal bands in the correlation coefficient map γ(λref = 800 nm, λcomp, t)) can be found in the time-dependent recordings of diffuse reflectance spectra from epileptic cortex defined by ECoG.
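The wavelength-dependent temporal map can be illustrated with a small sketch. Here γ is taken to be a sliding-window Pearson correlation between the reflectance time series at the 800 nm reference wavelength and each comparison wavelength; this particular windowing construction is an assumption for illustration, not necessarily the dissertation's exact definition of γ:

```python
import numpy as np

def correlation_map(spectra, wavelengths, lam_ref=800.0, win=32):
    """Sliding-window Pearson correlation between the reflectance time
    series at lam_ref and every wavelength.
    spectra: array of shape (n_times, n_wavelengths).
    Returns a (n_wavelengths, n_windows) map; horizontal bands mark
    wavelength ranges that track (or anti-track) the reference."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    spectra = np.asarray(spectra, dtype=float)
    i_ref = int(np.argmin(np.abs(wavelengths - lam_ref)))
    n_t, n_w = spectra.shape
    starts = list(range(0, n_t - win + 1, win))
    gamma = np.empty((n_w, len(starts)))
    for j, s in enumerate(starts):
        block = spectra[s:s + win]
        ref = block[:, i_ref]
        for w in range(n_w):
            gamma[w, j] = np.corrcoef(ref, block[:, w])[0, 1]
    return gamma
```

A row of the returned map that stays near +1 or -1 across all windows would render as one of the horizontal bands mentioned above.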
Abstract:
Background: During recent years the use of laparoscopic cholecystectomy has increased dramatically, sometimes resulting in overtreatment. The aim of this work was to retrospectively analyze all laparoscopic cholecystectomies performed in a single center, in order to find the percentage of patients whose surgical treatment may be explained by this general trend, and to speculate about the possible causes.
Methods: 831 patients who underwent a laparoscopic cholecystectomy from 1999 to 2008 were retrospectively analyzed.
Results: At discharge, 43.08% of patients had been operated on because of at least one episode of biliary colic before the one at admission; 14.08% presented with acute lithiasic cholecystitis; 14.68% were operated on because of an increase in bilirubin level; 1.56% because of a previous episode of jaundice with normal bilirubin at admission; 0.72% had gallbladder adenomas; 0.72% had cholangitis; 0.36% had a biliodigestive fistula; and one patient (0.12%) had acalculous cholecystitis. Excluding all these patients, 21.18% were operated on without indications.
Conclusions: The broadening of indications for laparoscopic cholecystectomy is undisputed and can be considered a consequence of newly introduced technologies, increased demand from patients, and the need for practice by inexperienced surgeons. If not prevented, this trend could continue indefinitely.
Abstract:
Vegetation patterns of mangroves in the Florida Coastal Everglades (FCE) result from the interaction of environmental gradients and natural disturbances (i.e., hurricanes), creating an array of distinct riverine and scrub mangroves across the landscape. We investigated how landscape patterns of biomass and total net primary productivity (NPPT), including allocation to above- and belowground mangrove components, varied inter-annually (2001–2004) across gradients in soil properties and hydroperiod in two distinct FCE basins: Shark River Estuary and Taylor River Slough. We propose that the allocation of belowground biomass and productivity (NPPB) relative to aboveground allocation is greater in regions with P limitation and permanent flooding. Porewater sulfide was significantly higher in Taylor River (1.2 ± 0.3 mM) than in Shark River (0.1 ± 0.03 mM), indicating the lack of a tidal signature and more permanent flooding in that basin. In Shark River there was a decrease in soil P density and a corresponding increase in soil N:P from the mouth (28) to upstream locations (46–105), consistent with previous results in this region. Taylor River sites showed the highest P limitation (soil N:P > 60). Average NPPT was twice as high in higher-P environments (17.0 ± 1.1 Mg ha−1 yr−1) as in lower-P regions (8.3 ± 0.3 Mg ha−1 yr−1). The ratio of root biomass to aboveground wood biomass (BGB:AWB) was 17 times higher in P-limited environments, demonstrating the allocation strategies of mangroves under resource limitation. Riverine mangroves allocated most of their NPPT aboveground (69%), while scrub mangroves showed the highest allocation belowground (58%). Total production to biomass (P:B) ratios were lower in Shark River sites (0.11 yr−1), whereas in Taylor River sites P:B ratios were higher and more variable (0.13–0.24 yr−1).
Our results suggest that the lower P availability in Taylor River relative to the Shark River basin, along with higher sulfide concentrations and permanent flooding, accounts for the higher allocation to belowground biomass and production at the expense of aboveground growth and wood biomass. These distinct patterns of carbon partitioning between riverine and scrub mangroves in response to environmental stress support our hypothesis that belowground allocation is a significant contribution to soil carbon storage in forested wetlands across the FCE, particularly in P-limited scrub mangroves. Elucidating these biomass strategies will improve the analysis of carbon budgets (storage and production) in neotropical mangroves and our understanding of which conditions lead to net carbon sinks in the tropical coastal zone.
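The allocation metrics reported above (fraction of NPPT belowground, BGB:AWB, and P:B) all reduce to simple ratios of the measured components. A toy Python helper, using hypothetical component values rather than the FCE measurements:

```python
def allocation_metrics(npp_above, npp_below, biomass_above, biomass_below):
    """Carbon-allocation ratios of the kind reported in the study
    (units as supplied, e.g. Mg/ha/yr for NPP and Mg/ha for biomass).
    The input values here are illustrative, not FCE data."""
    npp_total = npp_above + npp_below
    return {
        "frac_belowground": npp_below / npp_total,   # e.g. 0.58 for scrub
        "bgb_awb": biomass_below / biomass_above,    # root : aboveground wood
        "p_to_b": npp_total / (biomass_above + biomass_below),  # yr^-1
    }
```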
Abstract:
The purpose of this study was to analyze network performance by observing the effect of varying network size and data link rate on one of the most commonly found network configurations. Computer networks have been growing explosively, and networking is used in every aspect of business, including advertising, production, shipping, planning, billing, and accounting. Communication takes place through networks that form the basis of information transfer. The number and type of components may vary from network to network depending on several factors, such as requirements and the actual physical placement of the networks. There is no fixed network size: networks can be very small, consisting of, say, five or six nodes, or very large, consisting of over two thousand nodes. The varying network sizes make it very important to study network performance so as to be able to predict the functioning and suitability of a network. The findings demonstrated that network performance parameters such as global delay, load, router processor utilization, and router processor delay are affected significantly by the increase in the size of the network, and that there exists a correlation between the various parameters and the size of the network. These variations depend not only on the magnitude of the change in the actual physical area of the network but also on the data link rate used to connect the various components of the network.
Abstract:
Suppose two or more variables are jointly normally distributed. If there is a common relationship among these variables, it is important to quantify that relationship by the correlation coefficient, a parameter that measures its strength; the correlation coefficient can then be used to develop a predictive equation and, ultimately, to draw testable conclusions about the parent population. This research focused on the correlation coefficient ρ for the bivariate and trivariate normal distributions when equal variances and equal covariances are assumed. In particular, we derived the maximum likelihood estimators (MLEs) of the distribution parameters, assuming all of them are unknown, and we studied the properties and asymptotic distribution of the MLE ρ̂. Having shown this asymptotic normality, we were able to construct confidence intervals for the correlation coefficient ρ and to test hypotheses about ρ. With a series of simulations, the performance of our new estimators was studied and compared with that of estimators that already exist in the literature. The results indicated that the MLE performs as well as or better than the others.
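For the bivariate equal-variance case, maximizing the likelihood yields a closed form, ρ̂ = 2Sxy / (Sxx + Syy), where Sxx, Syy, Sxy are centered sums of squares and cross-products; this sketch is a reconstruction for illustration, not the dissertation's full derivation (which also covers the trivariate case), and the confidence interval below uses Fisher's z-transform as a standard stand-in for the asymptotic-normality interval studied in the text:

```python
import math
import numpy as np

def mle_rho_equal_var(x, y):
    """MLE of rho for a bivariate normal with equal (unknown) variances:
    rho_hat = 2*Sxy / (Sxx + Syy), means estimated by sample means."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    dx, dy = x - x.mean(), y - y.mean()
    return 2.0 * np.dot(dx, dy) / (np.dot(dx, dx) + np.dot(dy, dy))

def fisher_ci(rho_hat, n, z_crit=1.959964):
    """Approximate 95% confidence interval for rho via Fisher's
    z-transform, z = atanh(rho_hat), with standard error 1/sqrt(n-3)."""
    z = math.atanh(rho_hat)
    half = z_crit / math.sqrt(n - 3)
    return math.tanh(z - half), math.tanh(z + half)
```

Unlike the ordinary Pearson estimator, this form pools the two sample variances, which is exactly what the equal-variance assumption licenses; the two estimators coincide when the sample variances happen to be equal.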