957 results for Adaptive Expandable Data-Pump
Abstract:
In the future, mobile devices such as mobile phones and PDAs will be able to establish network connections using different connection methods in different situations. These connection methods have differing communication characteristics with respect to latency, bandwidth, error rate, and so on. Wireless connection methods are also characterized by strong variation of the link's properties with the environment. To achieve the best performance and usability, a mobile device must be able to adapt to the communication method in use and to changes in the communication environment. An essential part of data communication is the protocol stack, which enables communication between systems and provides network services to the user applications on the end device. For protocol stacks to adapt to the characteristics of a given communication environment, it must be possible to change the behaviour of the protocol stack at run time. Traditionally, however, protocol stacks have been built to be immutable, so that adaptation on this scale is very difficult, if not impossible, to implement. This thesis discusses the construction of adaptive protocol stacks using a component-based software framework that allows protocol stacks to be modified at run time. By implementing an example system and measuring its performance in a varying communication environment, we show that adaptive protocol stacks are feasible to build and that they offer significant benefits, especially in future mobile devices.
Abstract:
Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted super-resolution (SR) techniques, to reconstruct a high-resolution (HR) motion-free volume from a set of clinical low-resolution (LR) images. Reconstruction is usually modeled as an inverse problem in which the regularization term plays a central role in the reconstruction quality. The literature has been drawn to Total Variation energies because of their edge-preserving ability, but only standard explicit steepest-descent techniques have been applied for optimization. In preliminary work, it was shown that novel fast convex optimization techniques can be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. First, we briefly review the Bayesian and variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Second, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms against residual registration errors, and we present a novel strategy for automatically selecting the weight of the regularization relative to the data fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
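The TV-regularized inverse problem described above can be sketched in a few lines. This is a minimal illustration only: the block-average downsampling operator stands in for the real slice-acquisition model, plain explicit gradient descent is used (the kind of "standard explicit steepest gradient technique" the abstract contrasts with faster convex solvers), and all parameter values are assumptions.

```python
import numpy as np

def tv_gradient(x, eps=0.1):
    # Gradient of a smoothed total-variation energy for a 2D image
    # (eps makes TV differentiable at flat regions; illustrative choice).
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps**2)
    px, py = dx / mag, dy / mag
    # Negative divergence of the normalized gradient field.
    return -((px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1)))

def downsample(x, f):
    # Toy acquisition operator D: block averaging by factor f.
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(y, f):
    # Adjoint of block averaging: replicate each value, rescale by 1/f^2.
    return np.repeat(np.repeat(y, f, axis=0), f, axis=1) / f**2

def sr_reconstruct(y, f, lam=0.02, step=0.5, iters=200):
    # Minimize 0.5*||D x - y||^2 + lam*TV(x) by explicit gradient descent.
    x = upsample(y, f) * f**2            # start from naive replication
    for _ in range(iters):
        resid = downsample(x, f) - y     # data-fidelity residual
        x = x - step * (upsample(resid, f) + lam * tv_gradient(x))
    return x
```

The regularization weight `lam` is exactly the quantity whose automatic selection, relative to the data fidelity term, the abstract addresses.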
Abstract:
Laboratory and field experiments have demonstrated in many cases that malaria vectors do not feed randomly, but show marked preferences for either infected or non-infected hosts. These preferences are likely shaped in part by the costs the parasites impose on both their vertebrate and dipteran hosts. However, the effect of changes in vector behaviour on actual parasite transmission remains a debated issue. We used the natural associations between a malaria-like parasite, Polychromophilus murinus, the bat fly Nycteribia kolenatii and a vertebrate host, the Daubenton's bat Myotis daubentonii, to test the vector's feeding preference based on the host's infection status, using two different approaches: 1) controlled behavioural assays in the laboratory, where bat flies could choose between a pair of hosts; 2) natural bat fly abundance data from wild-caught bats, serving as an approximation of the bat flies' realised feeding preference. Hosts with the fewest infectious stages of the parasite were most attractive to the bat flies that switched hosts in the behavioural assay. In line with the hypothesis of costs imposed by parasites on their vectors, bat flies carrying parasites suffered higher mortality. However, in wild populations, bat fly distribution was driven more by the bat's body condition than by its infection level. Though the absolute frequency of host switches performed by the bat flies during the assays was low, it was extremely high in the context of potential parasite transmission. The decreased survival of infected bat flies suggests that the preference for less infected hosts is an adaptive trait. Nonetheless, other ecological processes ultimately determine the vector's biting rate and thus transmission. Inherent vector preferences therefore play only a marginal role in parasite transmission in the field, and it is these ecological processes, rather than the preferences per se, that need to be identified for successful epidemiological predictions.
Abstract:
Clines in chromosomal inversion polymorphisms, presumably driven by climatic gradients, are common but there is surprisingly little evidence for selection acting on them. Here we address this long-standing issue in Drosophila melanogaster by using diagnostic single nucleotide polymorphism (SNP) markers to estimate inversion frequencies from 28 whole-genome Pool-seq samples collected from 10 populations along the North American east coast. Inversions In(3L)P, In(3R)Mo, and In(3R)Payne showed clear latitudinal clines, and for In(2L)t, In(2R)NS, and In(3R)Payne the steepness of the clinal slopes changed between summer and fall. Consistent with an effect of seasonality on inversion frequencies, we detected small but stable seasonal fluctuations of In(2R)NS and In(3R)Payne in a temperate Pennsylvanian population over 4 years. In support of spatially varying selection, we observed that the cline in In(3R)Payne has remained stable for >40 years and that the frequencies of In(2L)t and In(3R)Payne are strongly correlated with climatic factors that vary latitudinally, independent of population structure. To test whether these patterns are adaptive, we compared the amount of genetic differentiation of inversions versus neutral SNPs and found that the clines in In(2L)t and In(3R)Payne are maintained nonneutrally and independent of admixture. We also identified numerous clinal inversion-associated SNPs, many of which exhibit parallel differentiation along the Australian cline and reside in genes known to affect fitness-related traits. Together, our results provide strong evidence that inversion clines are maintained by spatially (and perhaps also temporally) varying selection. We interpret our data in light of current hypotheses about how inversions are established and maintained.
Abstract:
Forest inventories are used to estimate forest characteristics and the condition of forests for many different applications: operational tree logging for the forest industry, forest health assessment, carbon balance estimation, land-cover and land-use analysis to avoid forest degradation, and so on. Recent inventory methods rely heavily on remote sensing data combined with field sample measurements, which are used to produce estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or airborne laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjusted to the local conditions of the study area at hand. All data handling and parameter tuning should be objective and automated as far as possible. The methods also need to be robust when applied to different forest types. Since there generally are no comprehensive direct physical models connecting the remote sensing data from different sources to the forest parameters being estimated, the mathematical estimation models are of "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of a model based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters being estimated, field work must be performed. In larger study areas with dense forests, field work is expensive and should therefore be minimized. To obtain cost-efficient inventories, field work could be partly replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory.
The parameter-definition steps of the mathematical models are automated, and cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of the characteristics of new areas.
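The kind of automated variable selection the abstract motivates can be sketched as follows. Greedy forward selection with a linear least-squares model is one illustrative choice among many, not necessarily the method the thesis develops:

```python
import numpy as np

def forward_select(X, y, max_vars):
    # Greedy forward selection: repeatedly add the auxiliary variable
    # (column of X) that most reduces the residual RMSE of a linear
    # least-squares fit; capping max_vars guards against over-fitting.
    chosen, remaining = [], list(range(X.shape[1]))
    while remaining and len(chosen) < max_vars:
        def rmse_with(j):
            A = X[:, chosen + [j]]
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return float(np.sqrt(np.mean((A @ coef - y) ** 2)))
        best = min(remaining, key=rmse_with)
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

In practice the stopping point would be chosen by cross-validation rather than a fixed cap, precisely to avoid the over-fitting of collinear auxiliary variables mentioned above.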
Abstract:
Decreasing bone mass during aging predisposes to fractures, and it is estimated that every second woman and one in five men will suffer osteoporotic fractures during their lifetime. Bone is an adaptive tissue undergoing continuous remodeling in response to physical and metabolic stimuli. Bone mass decreases through a net negative balance in the bone remodeling process, in which new bone incompletely replaces the resorbed bone mass. Bone resorption is carried out by the osteoclasts; the bone mineral is solubilized by acidification, and the organic matrix is subsequently degraded by proteases. Several classes of drugs are available for the prevention of osteoporotic fractures. They act by different mechanisms to increase bone mass, and some of them act mainly as antiresorptives by inhibiting osteoclast formation or function. Optimally, a drug should act selectively on a specific process, since effects on other processes usually result in adverse effects. The purpose of this study was to evaluate whether the osteoclastic vacuolar adenosine triphosphatase (V-ATPase), which drives the solubilization of bone mineral, can be selectively inhibited despite its ubiquitous cellular functions. The V-ATPase is a multimeric protein composed of 13 subunits, of which six possess two or more isoforms. Selectivity for the osteoclastic V-ATPase could be achieved if it has some structural uniqueness, such as a unique isoform combination. The a3 isoform of the 116 kDa subunit is indispensable for bone resorption; however, it is also present in, and mainly limited to, the lysosomes of other cells. No evidence of a structural uniqueness of the osteoclastic V-ATPase compared to the lysosomal V-ATPase was found, although this cannot yet be excluded. Thus, an inhibitor selective for the a3 isoform would target the lysosomal V-ATPase as well.
However, the results suggest that selectivity for bone resorption over lysosomal function can be obtained by two other mechanisms, suggesting that isoform a3 is a valid target. The first is differential compensation: bone resorption depends on the high level of a3 expression and is not compensated for by other isoforms, while the lower level of a3 in the lysosomes of other cells may be partly compensated for. The second mechanism arises because the bone resorption process is fundamentally different from lysosomal acidification, owing to the chemistry of bone dissolution and the anatomy of the resorbing osteoclast. By this mechanism, full inhibition of bone resorption is obtained at more than tenfold lower inhibitor concentrations than those needed to fully inhibit lysosomal acidification. The two mechanisms are additive. Based on the results, we suggest that bone resorption can be selectively inhibited if V-ATPase inhibitors that are sufficiently selective for the a3 isoform over the other isoforms are developed.
Abstract:
Approximately a quarter of the electrical power consumed in the pulp and paper industry is used in pumping systems. Improving pumping system efficiency is therefore a considerable way to reduce energy consumption in different processes. Pumping of wood pulp at different consistencies is common in the pulp and paper industry. Earlier, centrifugal pumps were used to pump pulp only at low consistencies, but the development of medium-consistency (MC) technology has made it possible to pump medium-consistency pulp as well. Pulp is a non-Newtonian fluid whose flow characteristics differ significantly from those of water. This thesis examines the energy efficiency of pumping medium-consistency pulp with a centrifugal pump. The factors affecting the pumping of MC pulp are presented, and the energy efficiency of pumping is examined in practice through a case study. Data obtained from the case study are used to evaluate the effects of pump rotational speed and pulp consistency on energy efficiency. Additionally, the losses caused by the control valve and the validity of the affinity laws in pulp pumping are evaluated. The results of this study can be used to demonstrate the energy consumption of MC pumping processes and to find ways to improve their energy efficiency.
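The affinity laws whose validity for pulp the thesis evaluates relate a centrifugal pump's flow rate, head, and power to its rotational speed. A minimal sketch (function and variable names are illustrative):

```python
def affinity_scale(q1, h1, p1, n1, n2):
    # Centrifugal-pump affinity laws: flow scales linearly with speed,
    # head with speed squared, and power with speed cubed.
    # Derived for water-like fluids; for non-Newtonian MC pulp their
    # validity is exactly what the case study examines.
    r = n2 / n1
    return q1 * r, h1 * r**2, p1 * r**3
```

For example, halving the speed of a pump delivering 100 l/s at 50 m head and 20 kW shaft power would, by the laws, give 50 l/s, 12.5 m, and 2.5 kW, which is why speed control is usually far more efficient than throttling with a control valve.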
Abstract:
Through advances in technology, System-on-Chip design is moving towards integrating tens to hundreds of intellectual property blocks into a single chip. In such a many-core system, on-chip communication becomes a performance bottleneck for high-performance designs. Network-on-Chip (NoC) has emerged as a viable solution for the communication challenges in highly complex chips. The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many on-chip communication challenges such as wiring complexity, communication latency, and bandwidth. Furthermore, the combined benefits of 3D IC and NoC schemes provide the possibility of designing a high-performance system in a limited chip area. The major advantages of 3D NoCs are the considerable reductions in average latency and power consumption. Several factors degrade the performance of NoCs. In this thesis, we investigate three main performance-limiting factors: network congestion, faults, and the lack of efficient multicast support. We address these issues by means of routing algorithms. Congestion of data packets may lead to increased network latency and power consumption. Thus, we propose three different approaches for alleviating congestion in the network. The first approach is based on measuring congestion in different regions of the network, distributing this information over the network, and utilizing it when making routing decisions. The second approach employs a learning method to dynamically find less congested routes according to the underlying traffic. The third approach is based on a fuzzy-logic technique to make better routing decisions when traffic information for different routes is available. Faults degrade performance significantly, since packets must take longer paths to be routed around the faults, which in turn increases congestion around the faulty regions.
We propose four methods to tolerate faults at the link and switch level by using only the shortest paths, as long as such a path exists. The unique characteristic of these methods is that they tolerate faults while maintaining the performance of the NoC. To the best of our knowledge, these algorithms are the first approaches to bypass faults before reaching them while avoiding unnecessary misrouting of packets. Current implementations of multicast communication cause a significant performance loss for unicast traffic, because the routing rules of multicast packets limit the adaptivity of unicast packets. We present an approach in which both unicast and multicast packets can be routed efficiently within the network. While providing more efficient multicast support, the proposed approach does not affect the performance of unicast routing at all. In addition, to reduce the overall path length of multicast packets, we present several partitioning methods along with analytical models for latency estimation. This approach is discussed in the context of 3D mesh networks.
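The first congestion-aware approach above, i.e. using locally measured congestion when making a routing decision, can be sketched for a 2D mesh. The free-buffer-slot metric and the `free_slots` interface are illustrative assumptions, not the thesis's actual mechanism:

```python
def route_minimal_adaptive(src, dst, free_slots):
    # Minimal (shortest-path) adaptive routing on a 2D mesh: collect the
    # productive directions toward dst, then pick the neighbour reporting
    # the most free buffer slots -- a simple congestion proxy.
    # `free_slots` maps direction -> free slots (hypothetical interface).
    sx, sy = src
    dx, dy = dst
    candidates = []
    if dx > sx:
        candidates.append('E')
    elif dx < sx:
        candidates.append('W')
    if dy > sy:
        candidates.append('N')
    elif dy < sy:
        candidates.append('S')
    if not candidates:
        return 'LOCAL'  # packet has reached its destination router
    return max(candidates, key=lambda d: free_slots.get(d, 0))
```

Restricting the choice to minimal directions keeps paths shortest (as in the fault-tolerant methods above), while the congestion metric decides among the remaining alternatives.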
Abstract:
An experimental study was conducted on a pump-turbine model in pumping mode, in order to characterize the flow field structure in the region between the stay and guide vanes, using mainly laser-Doppler anemometry in a two-color, back-scattered-light system. The structure of the steady and unsteady flow was analyzed. The measurements were performed at three operating points. The data obtained provide appropriate boundary conditions and a good validation base for numerical codes, and support the understanding of the main loss mechanisms of this complex flow.
Abstract:
A quadcopter is a helicopter with four rotors; it is a mechanically simple device, but it requires complex electrical control for each motor. The control system needs accurate information about the quadcopter's attitude in order to achieve stable flight. The goal of this bachelor's thesis was to research how this information could be obtained. A literature review revealed that most quadcopters whose source code is available use a complementary filter, or some derivative of it, to fuse data from a gyroscope, an accelerometer and often also a magnetometer. These sensors combined are called an Inertial Measurement Unit. This thesis focuses on calculating angles from each sensor's data and fusing these with a complementary filter. On the basis of the literature review and measurements made with a quadcopter, the proposed filter provides sufficiently accurate attitude data for a flight control system. However, a simple complementary filter has one significant drawback: it works reliably only when the quadcopter is hovering or moving at a constant speed. The reason is that an accelerometer cannot be used to measure angles accurately when linear acceleration is present. This problem can be addressed using a derivative of the complementary filter, such as an adaptive complementary filter, or a Kalman filter, which are not covered in this thesis.
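The complementary filter described above can be sketched in a few lines. The blend factor `alpha` and the single-axis (roll-only) geometry are illustrative simplifications:

```python
import math

def accel_to_roll(ax, ay, az):
    # Roll angle (rad) from the gravity vector measured by the accelerometer.
    # Only valid when linear acceleration is negligible -- the limitation
    # the thesis points out.
    return math.atan2(ay, az)

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    # High-pass the integrated gyro rate (accurate short-term, but drifts)
    # and low-pass the accelerometer angle (noisy short-term, stable
    # long-term); alpha sets the crossover between the two.
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

In hover, repeated updates converge to the accelerometer angle while suppressing its noise; during manoeuvres, the gyro term dominates over the short term, which is exactly why sustained linear acceleration corrupts the estimate.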
Abstract:
Most applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., that each point represent height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), so that only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data in forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and a low number of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether the local forest communities can achieve the level of technical proficiency required for accurate forest monitoring.
The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by factors such as variations in tree species and the season of data acquisition. These algorithms are adaptive with respect to point cloud characteristics and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. Comparisons with existing DTM extraction algorithms showed that the algorithms proposed in this thesis performed better with respect to the accuracy of tree heights estimated from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot preserve small features of the terrain (e.g., bumps, small hills and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications in which the primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis, in turn, is based on the idea of a moving voxel, in which gaps (openings in the canopy) that act as fuel breaks are located and their height is estimated. Test results showed a slight improvement in CBH estimation accuracy over existing CBH estimation methods based on height percentiles of the airborne laser scanner data. Moreover, being based on the moving-voxel idea, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential for providing information about vertical fuel continuity.
This information can be used to create vertical fuel continuity maps, which can provide more realistic information on the risk of crown fires than CBH alone.
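The normalization step described above, i.e. subtracting the DTM ground elevation from each return, can be sketched as follows. The nearest-cell lookup and the grid layout (origin at (0, 0), square cells) are illustrative simplifications of real DTM interpolation:

```python
import numpy as np

def normalize_point_cloud(points, dtm, cell):
    # Convert elevations to heights above ground.
    # points: (N, 3) array of x, y, z coordinates;
    # dtm: 2D grid of ground elevations (row = y index, col = x index)
    # with square cells of size `cell` and origin at (0, 0).
    pts = points.copy()
    cols = (pts[:, 0] // cell).astype(int)
    rows = (pts[:, 1] // cell).astype(int)
    pts[:, 2] -= dtm[rows, cols]   # nearest-cell ground elevation
    return pts
```

Any error in the DTM propagates directly into every normalized height, which is why the thesis emphasizes DTM accuracy as a prerequisite for reliable tree height and biomass estimation.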
Abstract:
Traumatic brain injury (TBI) often affects social adaptive functioning, and these changes in social adaptability are usually associated with general damage to the frontal cortex. Recent evidence suggests that certain neurons within the orbitofrontal cortex appear to be specialized for the processing of faces and facial expressions. The orbitofrontal cortex also appears to be involved in self-initiated somatic activation to emotionally-charged stimuli. According to Somatic Marker Theory (Damasio, 1994), reduced physiological activation fails to provide an individual with appropriate somatic cues to personally-relevant stimuli, and this, in turn, may result in maladaptive behaviour. Given the susceptibility of the orbitofrontal cortex in TBI, it was hypothesized that impaired perception of, and reactivity to, socially-relevant information might be responsible for some of the social difficulties encountered after TBI. Fifteen persons who had sustained a moderate to severe brain injury were compared to age- and education-matched control participants. In the first study, both groups were presented with photographs of models displaying the major emotions and either asked to identify the emotions or simply to view the faces passively. In a second study, participants were asked to select cards from decks that varied in terms of how much money could be won or lost. Decks with higher losses were considered high-risk decks. Electrodermal activity was measured concurrently in both situations. Relative to controls, TBI participants were found to have difficulty identifying expressions of surprise, sadness, anger, and fear. TBI persons were also found to be under-reactive, as measured by electrodermal activity, while passively viewing slides of negative expressions. No group difference in reactivity to the high-risk card decks was observed.
The ability to identify emotions in the face and electrodermal reactivity to faces and to high-risk decks in the card game were examined in relationship to social monitoring and empathy as described by family members or friends on the Brock Adaptive Functioning Questionnaire (BAFQ). Difficulties identifying negative expressions (i.e., sadness, anger, fear, and disgust) predicted problems in monitoring social situations. As well, a modest relationship was observed between hypo-arousal to negative faces and problems with social monitoring. Finally, hypo-arousal in the anticipation of risk during the card game related to problems in empathy. In summary, these data are consistent with the view that alterations in the ability to perceive emotional expressions in the face and the disruption in arousal to personally-relevant information may be accounting for some of the difficulties in social adaptation often observed in persons who have sustained a TBI. Furthermore, these data provide modest support for Damasio's Somatic Marker Theory in that physiological reactivity to socially-relevant information has some value in predicting social function. Therefore, the assessment of TBI persons, particularly those with adaptive behavioural problems, should be expanded to determine whether alterations in perception and reactivity to socially-relevant stimuli have occurred. When this is the case, rehabilitative strategies aimed more specifically at these difficulties should be considered.
Abstract:
The global wine industry is experiencing the impacts of climate change. Canada's major wine sector, the Ontario Wine Industry (OWI), is no exception to this trend. Warmer winter and summer temperatures are affecting wine production. The industry needs to adapt to these challenges, but its capacity to do so is unclear. To date, only a limited number of studies exist regarding the adaptive capacity of the wine industry to climate change. Accordingly, this study developed an adaptive capacity assessment framework for the wine industry. The OWI served as the case study for the implementation of the assessment framework. Data were obtained by means of a questionnaire sent to grape growers, winemakers and supporting institutions in Ontario. The results indicated that the OWI has adaptive capacity in the areas of financial, institutional, political, technological, perceptual, knowledge, diversity and social capital resources. Based on the OWI case study, this framework provides an effective means of assessing regional wine industries' capacity to adapt to climate change.
Abstract:
There are many ways to generate geometrical models for numerical simulation, and most of them start with a segmentation step to extract the boundaries of the regions of interest. This paper presents an algorithm to generate a patient-specific three-dimensional geometric model, based on a tetrahedral mesh, without an initial extraction of contours from the volumetric data. Using the information directly available in the data, such as gray levels, we build a metric to drive a mesh adaptation process. The metric is used to specify the size and orientation of the tetrahedral elements everywhere in the mesh. Our method, which produces anisotropic meshes, gives good results with synthetic and real MRI data. The quality of the resulting model has been evaluated qualitatively and quantitatively by comparing it with an analytical solution and with a segmentation made by an expert. Results show that our method produces, in 90% of cases, meshes as good as or better than a similar isotropic method, based on the accuracy of the volume reconstruction for a given mesh size. Moreover, a comparison of the Hausdorff distances between the adapted meshes of both methods and ground-truth volumes shows that our method decreases reconstruction errors faster. Copyright © 2015 John Wiley & Sons, Ltd.
Abstract:
The proliferation of wireless sensor networks in a large spectrum of applications has been spurred by rapid advances in MEMS (micro-electro-mechanical systems) based sensor technology, coupled with low-power, low-cost digital signal processors and radio-frequency circuits. A sensor network is composed of thousands of low-cost, portable devices with substantial sensing, computing and wireless communication capabilities. This large collection of tiny sensors can form a robust distributed system for automated information gathering and distributed sensing. The main attractive feature is that such a sensor network can be deployed in remote areas. Since the sensor nodes are battery powered, all nodes should collaborate to form a fault-tolerant network that makes efficient use of precious network resources such as the wireless channel, memory and battery capacity. The most crucial constraint is energy consumption, which has become the prime challenge in the design of long-lived sensor nodes.