993 results for Technical loss


Relevance:

20.00%

Publisher:

Abstract:

Modern cellular channels in 3G networks incorporate sophisticated power control and dynamic rate adaptation, which can have a significant impact on adaptive transport-layer protocols such as TCP. Although existing studies have evaluated the performance of TCP over such networks, they are based solely on observations at the transport layer and hence have no visibility into the lower-layer dynamics that are a key characteristic of these networks. In this work, we present a detailed characterization of TCP behavior based on cross-layer measurement of transport-layer, RF, and MAC-layer parameters. In particular, through a series of active TCP/UDP experiments and measurement of the relevant variables at all three layers, we characterize both the wireless scheduler and the radio link protocol in a commercial CDMA2000 network and assess their impact on TCP dynamics. Somewhat surprisingly, our findings indicate that the wireless scheduler is mostly insensitive to channel quality and sector load over short timescales and is mainly affected by the transport-layer data rate. Furthermore, with the help of a robust correlation measure, Normalized Mutual Information, we quantify the impact of the wireless scheduler and the radio link protocol on TCP parameters such as round-trip time, throughput, and packet loss rate.
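
Normalized Mutual Information scores the statistical dependence between two measurement series on a 0-to-1 scale, whether or not the relationship is linear. The sketch below is an illustrative histogram-based estimator, not necessarily the estimator used in the study; the bin count and the synthetic scheduler-rate/RTT series are assumptions.

```python
# Illustrative NMI estimator between two discretized measurement series,
# e.g. the wireless scheduler's assigned rate and TCP round-trip-time samples.
import numpy as np

def normalized_mutual_information(x, y, bins=16):
    """NMI of two real-valued series, estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()          # joint distribution
    px = pxy.sum(axis=1)               # marginal of x
    py = pxy.sum(axis=0)               # marginal of y

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    hx, hy = entropy(px), entropy(py)
    hxy = entropy(pxy.ravel())
    mi = hx + hy - hxy                 # I(X;Y) = H(X) + H(Y) - H(X,Y)
    # Normalize so that 0 = independent, 1 = perfectly dependent
    return mi / np.sqrt(hx * hy) if hx > 0 and hy > 0 else 0.0

# Example with synthetic, partially dependent data (assumed values)
rng = np.random.default_rng(0)
rate = rng.normal(1000, 200, 5000)             # scheduler rate, kbps
rtt = 0.5 * rate + rng.normal(0, 100, 5000)    # RTT, ms
print(round(normalized_mutual_information(rate, rtt), 3))
```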

Relevance:

20.00%

Publisher:

Abstract:

Emerging configurable infrastructures such as large-scale overlays and grids, distributed testbeds, and sensor networks comprise diverse sets of available computing resources (e.g., CPU and OS capabilities and memory constraints) and network conditions (e.g., link delay, bandwidth, loss rate, and jitter) whose characteristics are both complex and time-varying. At the same time, distributed applications to be deployed on these infrastructures exhibit increasingly complex constraints and requirements on the resources they wish to utilize. Examples include selecting nodes and links to schedule an overlay multicast file transfer across the Grid, or embedding a network experiment with specific resource constraints in a distributed testbed such as PlanetLab. Thus, a common problem facing the efficient deployment of distributed applications on these infrastructures is that of "mapping" application-level requirements onto the network in such a manner that the requirements of the application are realized, assuming that the underlying characteristics of the network are known. We refer to this problem as the network embedding problem. In this paper, we propose a new approach to tackling this combinatorially hard problem. Thanks to a number of heuristics, our approach greatly improves performance and scalability over previously existing techniques. It does so by pruning large portions of the search space without overlooking any valid embedding. We present a construction that allows a compact representation of candidate embeddings, which is maintained by carefully controlling the order in which candidate mappings are inserted and invalid mappings are removed. We present an implementation of our proposed technique, which we call NETEMBED, a service that identifies feasible mappings of a virtual network configuration (the query network) onto an existing real infrastructure or testbed (the hosting network). We present results of extensive performance evaluation experiments of NETEMBED using several combinations of real and synthetic network topologies. Our results show that the NETEMBED service is quite effective in identifying one (or all) possible embeddings for quite sizable queries and hosting networks, much larger than any existing technique or service is able to handle.
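
To make the embedding problem concrete, here is a deliberately small backtracking search that maps query nodes onto host nodes while pruning candidates that violate CPU or bandwidth constraints. It only sketches the general idea; NETEMBED's compact candidate representation and its insertion/removal ordering are not reproduced, and the data structures and toy topology are assumptions.

```python
# Minimal constraint-pruned search for one feasible embedding (hypothetical
# data model: node -> CPU requirement/capacity, link -> bandwidth).
def find_embedding(query_nodes, query_links, host_nodes, host_links):
    qlist = list(query_nodes)

    def consistent(qn, hn, mapping):
        if host_nodes[hn] < query_nodes[qn]:              # node (CPU) constraint
            return False
        for link, bw in query_links.items():
            if qn in link:
                other, = link - {qn}
                if other in mapping:                      # only check mapped neighbours
                    if host_links.get(frozenset((hn, mapping[other])), 0) < bw:
                        return False
        return True

    def search(i, mapping, used):
        if i == len(qlist):
            return dict(mapping)
        qn = qlist[i]
        for hn in host_nodes:
            if hn in used or not consistent(qn, hn, mapping):
                continue                                  # prune invalid candidates early
            mapping[qn], _ = hn, used.add(hn)
            found = search(i + 1, mapping, used)
            if found is not None:
                return found
            del mapping[qn]
            used.remove(hn)
        return None

    return search(0, {}, set())

# Toy example: embed a 3-node query network into a 4-node hosting network
query_nodes = {"q1": 2, "q2": 1, "q3": 1}
query_links = {frozenset(("q1", "q2")): 10, frozenset(("q2", "q3")): 5}
host_nodes = {"h1": 4, "h2": 2, "h3": 2, "h4": 1}
host_links = {frozenset(("h1", "h2")): 20, frozenset(("h2", "h3")): 10,
              frozenset(("h3", "h4")): 5, frozenset(("h1", "h3")): 8}
print(find_embedding(query_nodes, query_links, host_nodes, host_links))
```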

Relevance:

20.00%

Publisher:

Abstract:

The second-order statistics of neural activity were examined in a model of the cat LGN and V1 during free viewing of natural images. In the model, the specific patterns of thalamocortical activity required for a Hebbian maturation of direction-selective cells in V1 were found during the periods of visual fixation, when small eye movements occurred, but not when natural images were examined in the absence of fixational eye movements. In addition, simulations of stroboscopic rearing that replicated the abnormal pattern of eye movements observed in kittens chronically exposed to stroboscopic illumination produced results consistent with the reported loss of direction selectivity and preservation of orientation selectivity. These results suggest the involvement of the oculomotor activity of visual fixation in the maturation of cortical direction selectivity.
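
The second-order statistics at issue are essentially time-lagged correlations between pairs of units. A minimal illustration is the lagged cross-correlation below; the synthetic traces stand in for simulated LGN and V1 activity and, along with the lag range, are assumptions rather than the model's actual outputs.

```python
# Lagged cross-correlation between two activity traces, a basic
# second-order statistic of neural activity.
import numpy as np

def lagged_crosscorr(x, y, max_lag):
    """Pearson correlation of x(t) with y(t + lag) for each lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = list(range(-max_lag, max_lag + 1))
    vals = []
    for lag in lags:
        if lag >= 0:
            vals.append(np.mean(x[:len(x) - lag] * y[lag:]))
        else:
            vals.append(np.mean(x[-lag:] * y[:len(y) + lag]))
    return np.array(lags), np.array(vals)

rng = np.random.default_rng(4)
lgn = rng.normal(size=2000)
v1 = np.roll(lgn, 5) + 0.5 * rng.normal(size=2000)   # V1 lags LGN by 5 steps
lags, cc = lagged_crosscorr(lgn, v1, max_lag=20)
print("peak correlation at lag", lags[cc.argmax()])
```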

Relevance:

20.00%

Publisher:

Abstract:

Training data for supervised learning neural networks can be clustered such that the input/output pairs in each cluster are redundant. Redundant training data can adversely affect training time. In this paper we apply two clustering algorithms, ART2-A and the Generalized Equality Classifier, to identify training data clusters and thus reduce the training data and training time. The approach is demonstrated for a high-dimensional nonlinear continuous-time mapping. The demonstration shows a six-fold decrease in training time with little or no loss of accuracy on evaluation data.
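
The core idea, removing redundant input/output pairs before training, can be illustrated with a simple leader-style clustering pass in the spirit of ART2-A fast learning. This is not the paper's actual ART2-A or Generalized Equality Classifier implementation, and the distance threshold and toy mapping are assumptions.

```python
# Group near-duplicate input patterns and keep one (input, target)
# representative per cluster, cutting redundant training presentations.
import numpy as np

def reduce_training_set(X, Y, radius=0.1):
    centers, keep = [], []
    for i, x in enumerate(X):
        if centers:
            d = np.linalg.norm(np.asarray(centers) - x, axis=1)
            if d.min() <= radius:
                continue                 # redundant: an existing cluster covers it
        centers.append(x)
        keep.append(i)
    return X[keep], Y[keep]

# Toy demonstration: 5000 noisy copies of 50 distinct input patterns
rng = np.random.default_rng(1)
base = rng.uniform(size=(50, 8))
X = np.repeat(base, 100, axis=0) + rng.normal(0, 0.005, (5000, 8))
Y = np.sin(X).sum(axis=1, keepdims=True)   # some nonlinear target mapping
Xr, Yr = reduce_training_set(X, Y, radius=0.05)
print(len(X), "->", len(Xr))               # far fewer redundant pairs remain
```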

Relevance:

20.00%

Publisher:

Abstract:

This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a^(p)} of input patterns, and ARTb receives a stream {b^(p)} of input patterns, where b^(p) is the correct prediction given a^(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a^(p) are presented without b^(p), and their predictions at ARTb are compared with b^(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a^(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing. Parameter ρa is compared with the degree of match between a^(p) and the top-down learned expectation, or prototype, that is read out subsequent to activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Between input trials, ρa relaxes to a baseline vigilance. When this baseline is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized.
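
The match-tracking step described above can be sketched in a few lines: when the chosen ARTa category yields a wrong prediction at ARTb, vigilance ρa is raised just above that category's match value, forcing a new round of hypothesis testing. The fuzzy-AND matching, choice parameter, and toy weights below are simplifying assumptions, not the article's full system.

```python
# Simplified match-tracking sketch (fuzzy-ART style matching; values assumed).
import numpy as np

ALPHA = 0.01   # choice parameter (assumption)

def match(a, w):
    """Vigilance test value: |a ^ w| / |a| (fuzzy AND = componentwise min)."""
    return np.minimum(a, w).sum() / a.sum()

def choice(a, w):
    """Category choice value: |a ^ w| / (alpha + |w|)."""
    return np.minimum(a, w).sum() / (ALPHA + w.sum())

def select(a, weights, rho_a):
    """Best category by choice value whose match also passes vigilance."""
    for j in sorted(range(len(weights)), key=lambda j: -choice(a, weights[j])):
        if match(a, weights[j]) >= rho_a:
            return j
    return None

def match_tracking(a, weights, predictions, target, rho_baseline, eps=1e-3):
    """On a predictive error at ARTb, raise rho_a just above the match of the
    offending ARTa category and search again (hypothesis testing)."""
    rho_a = rho_baseline
    while True:
        j = select(a, weights, rho_a)
        if j is None:
            return None, rho_a       # all categories rejected: commit a new node
        if predictions[j] == target:
            return j, rho_a          # predictive success: accept this hypothesis
        rho_a = match(a, weights[j]) + eps

# Toy run: the category with the highest choice value predicts the wrong
# class, so vigilance is raised and the search settles on another category.
a = np.array([0.9, 0.1])
weights = [np.array([0.8, 0.2]), np.array([0.5, 0.05])]
predictions = ["class_A", "class_B"]
print(match_tracking(a, weights, predictions, "class_A", rho_baseline=0.0))
```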

Relevance:

20.00%

Publisher:

Abstract:

This article describes two neural network modules that form part of an emerging theory of how adaptive control of goal-directed sensory-motor skills is achieved by humans and other animals. The Vector-Integration-To-Endpoint (VITE) model suggests how synchronous multi-joint trajectories are generated and performed at variable speeds. The Factorization-of-LEngth-and-TEnsion (FLETE) model suggests how outflow movement commands from a VITE model may be performed at variable force levels without a loss of positional accuracy. The invariance of positional control under speed and force rescaling sheds new light upon a familiar strategy of motor skill development: Skill learning begins with performance at low speed and low limb compliance and proceeds to higher speeds and compliances. The VITE model helps to explain many neural and behavioral data about trajectory formation, including data about neural coding within the posterior parietal cortex, motor cortex, and globus pallidus, and behavioral properties such as Woodworth's Law, Fitts' Law, peak acceleration as a function of movement amplitude and duration, isotonic arm movement properties before and after arm deafferentation, central error correction properties of isometric contractions, motor priming without overt action, velocity amplification during target switching, velocity profile invariance across different movement distances, changes in velocity profile asymmetry across different movement durations, staggered onset times for controlling linear trajectories with synchronous offset times, changes in the ratio of maximum to average velocity during discrete versus serial movements, and shared properties of arm and speech articulator movements. The FLETE model provides new insights into how spino-muscular circuits process variable forces without a loss of positional control. These results explicate the size principle of motor neuron recruitment, descending co-contractive compliance signals, Renshaw cells, Ia interneurons, fast automatic reactive control by ascending feedback from muscle spindles, slow adaptive predictive control via cerebellar learning using muscle spindle error signals to train adaptive movement gains, fractured somatotopy in the opponent organization of cerebellar learning, adaptive compensation for variable moment-arms, and force feedback from Golgi tendon organs. More generally, the models provide a computational rationale for the use of nonspecific control signals in volitional control, or "acts of will", and of efference copies and opponent processing in both reactive and adaptive motor control tasks.
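
The VITE circuit's core idea, integrating a difference vector between a target command and a present-position command under a volitional GO signal, can be sketched numerically as below. The equations follow the commonly cited VITE formulation, but the parameter values, GO-signal shape, and integration scheme are illustrative assumptions rather than values from the article.

```python
# Euler-integrated sketch of VITE-style trajectory generation:
#   dV/dt = alpha * (-V + T - P)      (difference vector V)
#   dP/dt = G(t) * [V]^+              (present position P, rectified V, GO signal G)
import numpy as np

def vite_trajectory(target, p0=0.0, alpha=30.0, go_rate=20.0,
                    dt=0.001, duration=1.0):
    steps = int(duration / dt)
    V, P = 0.0, p0
    positions, velocities = [], []
    for k in range(steps):
        G = go_rate * (k * dt)             # GO signal grows with time (assumed form)
        dV = alpha * (-V + target - P)
        dP = G * max(V, 0.0)
        V += dV * dt
        P += dP * dt
        positions.append(P)
        velocities.append(dP)
    return np.array(positions), np.array(velocities)

pos, vel = vite_trajectory(target=1.0)
print(f"final position ~ {pos[-1]:.3f}, peak velocity at t = {vel.argmax() * 0.001:.2f} s")
# The velocity profile is roughly bell-shaped; rescaling go_rate changes the
# movement speed while the endpoint stays at the target, consistent with the
# speed-invariance idea described above.
```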

Relevance:

20.00%

Publisher:

Abstract:

This article introduces ART 2-A, an efficient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architecture, but at a speed two to three orders of magnitude faster. Analysis and simulations show how ART 2-A systems correspond to ART 2 dynamics both in the fast-learn limit and at intermediate learning rates. Intermediate learning rates permit fast commitment of category nodes but slow recoding, analogous to properties of word frequency effects, encoding specificity effects, and episodic memory. Better noise tolerance is hereby achieved without a loss of learning stability. The ART 2 and ART 2-A systems are contrasted with the leader algorithm. The speed of ART 2-A makes practical the use of ART 2 modules in large-scale neural computation.
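
A compact way to see the algorithmic flavor of ART 2-A is the loop below: unit-normalized inputs, dot-product matching against committed prototypes, a vigilance test, and convex-combination prototype updates where β = 1 is the fast-learn limit and intermediate β gives fast commitment but slow recoding. The vigilance, learning rate, and toy data are illustrative assumptions, not the article's simulations.

```python
# ART 2-A style clustering sketch (simplified; parameter values assumed).
import numpy as np

def normalize(x):
    return x / (np.linalg.norm(x) + 1e-12)

def art2a(inputs, rho=0.95, beta=0.3):
    weights, labels = [], []                       # committed category prototypes
    for x in map(normalize, inputs):
        if weights:
            sims = np.array([w @ x for w in weights])
            j = int(np.argmax(sims))
            if sims[j] >= rho:                     # resonance: update the winner
                weights[j] = normalize((1 - beta) * weights[j] + beta * x)
                labels.append(j)
                continue
        weights.append(x.copy())                   # mismatch: commit a new category
        labels.append(len(weights) - 1)
    return weights, labels

# Toy data: noisy samples around three well-separated directions in 4-D space
rng = np.random.default_rng(2)
protos = np.eye(4)[:3]
data = [protos[i % 3] + rng.normal(0, 0.05, 4) for i in range(300)]
weights, labels = art2a(data)
print("categories committed:", len(weights))
```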

Relevance:

20.00%

Publisher:

Abstract:

The International Energy Agency has repeatedly identified increased end-use energy efficiency as the quickest, least costly method of greenhouse gas mitigation, most recently in the 2012 World Energy Outlook, and urges all governing bodies to increase efforts to promote energy efficiency policies and technologies. The residential sector is recognised as a major potential source of cost-effective energy efficiency gains. Within the EU, this relative importance can be seen from a review of the National Energy Efficiency Action Plans (NEEAPs) submitted by member states, which in all cases place a large emphasis on the residential sector. This is particularly true for Ireland, whose residential sector has historically had higher energy consumption and CO2 emissions than the EU average and whose first NEEAP targeted the residential sector for 44% of the energy savings to be achieved by 2020. This thesis develops a bottom-up engineering archetype modelling approach to analyse the Irish residential sector and to estimate the technical energy savings potential of a number of policy measures. First, a model of space and water heating energy demand for new dwellings is built and used to estimate the technical energy savings potential of the 2008 and 2010 changes to Part L of the building regulations governing energy efficiency in new dwellings. Next, the author makes use of a valuable new dataset of Building Energy Rating (BER) survey results, first to characterise the highly heterogeneous stock of existing dwellings and then to estimate the technical energy savings potential of an ambitious national retrofit programme targeting up to 1 million residential dwellings. This thesis also presents work carried out by the author as part of a collaboration to produce a bottom-up, multi-sector LEAP model for Ireland. Overall, this work highlights the challenges faced in successfully implementing both sets of policy measures. It points to the wide range of final savings possible from particular policy measures and the resulting high degree of uncertainty as to whether particular targets will be met, and it identifies the key factors on which the success of these policies will depend. It makes recommendations on further modelling work and on the improvements needed in the data available to researchers and policy makers alike in order to develop increasingly sophisticated residential energy demand models and better inform policy.
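
A bottom-up archetype model, in its simplest form, multiplies a per-dwelling demand estimate for each archetype by the number of dwellings in that archetype and then applies scenario parameters such as a retrofit savings fraction and an uptake rate. The sketch below shows only that structure; the archetype names, dwelling counts, unit demands, and savings figures are placeholder assumptions, not values from the thesis or the BER dataset.

```python
# Stylized bottom-up archetype stock model (all numbers are placeholders).
from dataclasses import dataclass

@dataclass
class Archetype:
    name: str
    dwellings: int           # stock count in this archetype
    unit_demand_kwh: float   # annual space + water heating demand per dwelling

def stock_demand_gwh(archetypes):
    """Total stock demand in GWh."""
    return sum(a.dwellings * a.unit_demand_kwh for a in archetypes) / 1e6

def retrofit_savings_gwh(archetypes, savings_fraction, uptake):
    """Technical savings if `uptake` share of each archetype achieves the
    given fractional demand reduction."""
    return sum(a.dwellings * uptake * a.unit_demand_kwh * savings_fraction
               for a in archetypes) / 1e6

stock = [
    Archetype("pre-1980 detached", 300_000, 22_000),
    Archetype("1980-2005 semi-detached", 500_000, 15_000),
    Archetype("post-2005 apartment", 200_000, 7_000),
]
print(f"baseline demand:  {stock_demand_gwh(stock):.0f} GWh")
print(f"retrofit savings: {retrofit_savings_gwh(stock, 0.35, 0.6):.0f} GWh")
```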

Relevance:

20.00%

Publisher:

Abstract:

We study the implications of the effectuation concept for socio-technical artifact design as part of the design science research (DSR) process in information systems (IS). Effectuation logic is the opposite of causal logic. Effectuation does not focus on causes to achieve a particular effect, but on the possibilities that can be achieved with extant means and resources. Viewing socio-technical IS DSR through an effectuation lens highlights the possibility of designing the future even without set goals. We suggest that effectuation may be a useful perspective for design in dynamic social contexts, leading to a more differentiated view on the instantiation of mid-range artifacts for specific local application contexts. Design science researchers can draw on this paper’s conclusions to view their DSR projects through a fresh lens and to reexamine their research design and execution. The paper also offers avenues for future research to develop more concrete application possibilities of effectuation in socio-technical IS DSR and, thus, enrich the discourse.

Relevance:

20.00%

Publisher:

Abstract:

The recent implementation of Universal Neonatal Hearing Screening (UNHS) in all 19 maternity hospitals across Ireland has precipitated early identification of paediatric hearing loss in an Irish context. This qualitative, grounded theory study centres on the issue of parental coping as families receive and respond to a typically unexpected diagnosis of hearing loss in their newborn baby. Parental wellbeing is of particular concern as the diagnosis occurs in the context of recovery from birth and at a time when the parent-child relationship is being established. As the vast majority of children with a hearing loss are born into hearing families with no prior history of deafness, parents generally have had little exposure to childhood hearing loss and often experience acute emotional vulnerability as they respond to the diagnosis. The researcher conducted in-depth interviews primarily with parents (and to a lesser extent with professionals) and administered a follow-up postal questionnaire to parents. Through a grounded theory analysis of the data, the researcher subsequently fashioned a four-stage model depicting the parental journey of receiving and coping with a diagnosis. The four stages (entitled Anticipating, Confirming, Adjusting and Normalising) are differentiated by the chronology of service intervention and defined by the overarching parental experience. Far from representing a homogenous trajectory, this four-stage model is multifaceted and captures a wide diversity of parental experiences, ranging from acute distress to resilient hopefulness.

Relevance:

20.00%

Publisher:

Abstract:

AIMS: To assess the impact of involuntary job loss due to plant closure or layoff on relapse to smoking and smoking intensity among older workers. DESIGN, PARTICIPANTS, SAMPLE: Data come from the Health and Retirement Study, a nationally representative survey of older Americans aged 51-61 in 1991, followed every 2 years beginning in 1992. The 3052 participants who were working at the initial wave and had any history of smoking comprise the main sample. METHODS: Primary outcomes are smoking relapse at wave 2 (1994) among baseline former smokers, and smoking quantity at wave 2 among baseline current smokers. As reported at the wave 2 follow-up, 6.8% of the sample experienced an involuntary job loss between waves 1 and 2. FINDINGS: Older workers who experienced an involuntary job loss had over twice the odds of relapse of those who did not. Further, those who were current smokers prior to displacement and did not obtain new employment were found to be smoking more cigarettes, on average, after the job loss. CONCLUSIONS: The stress of job loss, along with other significant changes associated with leaving one's job, which would tend to increase cigarette consumption, must outweigh the financial hardship, which would tend to reduce consumption. This highlights job loss as an important health risk factor for older smokers.
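
An odds ratio like the one reported in the findings is typically estimated with a logistic regression of relapse on job-loss status (plus covariates, which are omitted here). The sketch below uses synthetic data with assumed relapse probabilities, not the Health and Retirement Study sample.

```python
# Minimal odds-ratio estimation via logistic regression (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 3000
job_loss = rng.binomial(1, 0.07, n)                 # ~7% displaced (assumed)
# Assumed relapse probabilities: ~10% without job loss vs. ~20% with job loss
p = np.where(job_loss == 1, 0.20, 0.10)
relapse = rng.binomial(1, p)

X = sm.add_constant(job_loss.astype(float))
fit = sm.Logit(relapse, X).fit(disp=0)
odds_ratio = np.exp(fit.params[1])                  # exponentiated coefficient
print(f"estimated odds ratio for job loss: {odds_ratio:.2f}")
```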

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: The nutrient-sensing Tor pathway governs cell growth and is conserved in nearly all eukaryotic organisms from unicellular yeasts to multicellular organisms, including humans. Tor is the target of the immunosuppressive drug rapamycin, which in complex with the prolyl isomerase FKBP12 inhibits Tor functions. Rapamycin is a gold standard drug for organ transplant recipients that was approved by the FDA in 1999 and is finding additional clinical indications as a chemotherapeutic and antiproliferative agent. Capitalizing on the plethora of recently sequenced genomes, we have conducted comparative genomic studies to annotate the Tor pathway throughout the fungal kingdom and related unicellular opisthokonts, including Monosiga brevicollis, Salpingoeca rosetta, and Capsaspora owczarzaki. RESULTS: Interestingly, the Tor signaling cascade is absent in three microsporidian species with available genome sequences, the only known instance of a eukaryotic group lacking this conserved pathway. The microsporidia are obligate intracellular pathogens with highly reduced genomes, and we hypothesize that they lost the Tor pathway as they adapted and streamlined their genomes for intracellular growth in a nutrient-rich environment. Two TOR paralogs are present in several fungal species as a result of either a whole genome duplication or independent gene/segmental duplication events. One such event was identified in the amphibian pathogen Batrachochytrium dendrobatidis, a chytrid responsible for worldwide global amphibian declines and extinctions. CONCLUSIONS: The repeated independent duplications of the TOR gene in the fungal kingdom might reflect selective pressure acting upon this kinase that populates two proteinaceous complexes with different cellular roles. These comparative genomic analyses illustrate the evolutionary trajectory of a central nutrient-sensing cascade that enables diverse eukaryotic organisms to respond to their natural environments.

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Blochmannia are obligately intracellular bacterial mutualists of ants of the tribe Camponotini. Blochmannia perform key nutritional functions for the host, including synthesis of several essential amino acids. We used Illumina technology to sequence the genome of Blochmannia associated with Camponotus vafer. RESULTS: Although Blochmannia vafer retains many nutritional functions, it is missing glutamine synthetase (glnA), a component of the nitrogen recycling pathway encoded by the previously sequenced B. floridanus and B. pennsylvanicus. With the exception of Ureaplasma, B. vafer is the only sequenced bacterium to date that encodes urease but lacks the ability to assimilate ammonia into glutamine or glutamate. Loss of glnA occurred in a deletion hotspot near the putative replication origin. Overall, compared to the likely gene set of their common ancestor, 31 genes are missing or eroded in B. vafer, compared to 28 in B. floridanus and four in B. pennsylvanicus. Three genes (queA, visC and yggS) show convergent loss or erosion, suggesting relaxed selection for their functions. Eight B. vafer genes contain frameshifts in homopolymeric tracts that may be corrected by transcriptional slippage. Two of these encode DNA replication proteins: dnaX, which we infer is also frameshifted in B. floridanus, and dnaG. CONCLUSIONS: Comparing the B. vafer genome with B. pennsylvanicus and B. floridanus refines the core genes shared within the mutualist group, thereby clarifying functions required across ant host species. This third genome also allows us to track gene loss and erosion in a phylogenetic context to more fully understand processes of genome reduction.

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVES: This study compared LDL, HDL, and VLDL subclasses in overweight or obese adults consuming either a reduced-carbohydrate (RC) or reduced-fat (RF) weight maintenance diet for 9 months following significant weight loss. METHODS: Thirty-five (21 RC; 14 RF) overweight or obese middle-aged adults completed a 1-year weight management clinic. Participants met weekly for the first six months and bi-weekly thereafter. Meetings included instruction on diet, physical activity, and behavior change related to weight management. Additionally, participants followed a liquid very-low-energy diet of approximately 2092 kJ per day for the first three months of the study. Subsequently, participants followed a dietary plan for nine months that targeted a reduced percentage of carbohydrate (approximately 20%) or fat (approximately 30%) intake and an energy intake level calculated to maintain weight loss. Lipid subclasses were analyzed using NMR spectroscopy prior to weight loss and at multiple intervals during weight maintenance. RESULTS: Body weight change was not significantly different within or between groups during weight maintenance (p>0.05). The RC group showed significant increases in mean LDL size, large LDL, total HDL, large and small HDL, mean VLDL size, and large VLDL during weight maintenance, while the RF group showed increases in total HDL, large and small HDL, total VLDL, and large, medium, and small VLDL (p<0.05). Group*time interactions were significant for large and medium VLDL (p<0.05). CONCLUSION: Some individual lipid subclasses improved in both dietary groups. Large and medium VLDL subclasses increased to a greater extent across weight maintenance in the RF group.