988 results for Spatial Resolving Power
Abstract:
We present a new Ultra Wide Band (UWB) Timed-Array Transmitter System with beamforming capability for high-resolution remote acquisition of vital signals. The system consists of four identical channels, each formed of three modules in a serial topology: a programmable delay circuit (PDC or τ), a novel UWB 5th-order Gaussian derivative pulse generator circuit (PG), and a planar Vivaldi antenna. The circuit was designed using a standard 0.18 μm CMOS process, and the planar antenna array was designed with a conductor film on a Rogers RO3206 substrate. SPICE simulation results showed pulse generation with 104 mVpp amplitude and 500 ps width. The power consumption is 543 μW and the energy consumption 0.27 pJ per pulse, using a 2 V power supply at a pulse repetition rate (PRR) of 100 MHz. Electromagnetic simulation results, obtained using CST Microwave (MW) Studio 2011, showed a main radiation lobe with a maximum gain of 13.2 dB, an angular width of 35.5° × 36.7°, and beam steering between 17° and −11° in azimuth (θ) and between 17° and −18° in elevation (φ) at the center frequency of 6 GHz.
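As a hedged illustration of the beam-steering role of the PDC (τ) blocks, the per-channel delays for a uniform linear timed array can be computed as below; the element spacing and steering angle are assumptions for the sketch, not values taken from the paper.

```python
# Sketch of the timed-array idea behind the PDC (tau) blocks: the main lobe
# is steered by delaying each channel's pulse. For a uniform linear array the
# required inter-element delay is d*sin(theta)/c. The spacing and angle below
# are illustrative (lambda/2 at 6 GHz is 25 mm), not taken from the paper.
from math import sin, radians

C = 3.0e8                      # speed of light (m/s)
N_CHANNELS = 4                 # as in the transmitter described above
d = 0.025                      # element spacing (m), hypothetical
theta = radians(15.0)          # desired steering angle

delays = [n * d * sin(theta) / C for n in range(N_CHANNELS)]
print([f"{t * 1e12:.1f} ps" for t in delays])   # values programmed into each PDC
```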
Abstract:
The spatial variability of the wave energy resource around the coastal waters of the Canary Archipelago is assessed using a long-term data set derived by means of hindcasting techniques. Results reveal the existence of large differences in the energetic content available in different zones of the archipelago, mainly during spring and autumn. The areas with a higher wave power level are the northern edge of Lanzarote, the western side of Lanzarote and Fuerteventura, the north and northwest of La Palma and El Hierro, as well as the north coast of Tenerife. The available energy potential slightly decreases on the north side of Gran Canaria and La Gomera.
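For reference, the quantity usually behind such wave power maps is the deep-water wave energy flux per unit crest length; a minimal sketch follows, with an illustrative sea state rather than Canary Islands data.

```python
from math import pi

# Deep-water wave energy flux per metre of wave crest, the usual quantity
# behind "wave power level" maps: P = rho * g^2 * Hs^2 * Te / (64 * pi).
RHO, G = 1025.0, 9.81            # sea-water density (kg/m^3), gravity (m/s^2)

def wave_power_kw_per_m(hs, te):
    """hs: significant wave height (m); te: energy period (s)."""
    return RHO * G ** 2 * hs ** 2 * te / (64.0 * pi) / 1000.0

# Illustrative sea state, not Canary Islands data.
print(f"{wave_power_kw_per_m(hs=2.5, te=9.0):.1f} kW/m")   # ~27.6 kW/m
```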
Abstract:
In the context of a “testing laboratory”, one of the most important aspects to deal with is the measurement result. Whenever decisions are based on measurement results, it is important to have some indication of the quality of those results. Many standards are available in every area concerned with noise measurement, but without an expression of uncertainty it is impossible to judge whether two results are in compliance or not. ISO/IEC 17025 is an international standard related to the competence of calibration and testing laboratories. It contains the requirements that testing and calibration laboratories have to meet if they wish to demonstrate that they operate a quality system, are technically competent, and are able to generate technically valid results. ISO/IEC 17025 deals specifically with the requirements for the competence of laboratories performing testing and calibration and for the reporting of the results, which may or may not contain opinions and interpretations. The standard requires appropriate methods of analysis to be used for estimating the uncertainty of measurement. From this point of view, for a testing laboratory performing sound power measurements according to specific ISO standards and European Directives, the evaluation of measurement uncertainty is the most important factor to deal with. Sound power level measurement according to ISO 3744:1994, performed with a limited number of microphones distributed over a surface enveloping a source, is affected by a certain systematic error and a related standard deviation. Comparing measurements carried out with different microphone arrays is difficult, because the results are affected by systematic errors and standard deviations that depend on the number of microphones arranged on the surface, their spatial positions, and the complexity of the sound field. A statistical approach can give an overview of the differences between sound power levels evaluated with different microphone arrays and an evaluation of the errors that afflict this kind of measurement. In contrast to the classical approach, which tends to follow the ISO GUM, this thesis presents a different point of view on the problem of comparing results obtained from different microphone arrays.
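As a hedged sketch of the measurement model underlying the enveloping-surface method, the sound power level follows from the energy-averaged sound pressure level over the microphone array plus a surface-area term; the microphone readings below are hypothetical, and the environmental and background-noise corrections of ISO 3744 are omitted.

```python
import numpy as np

def sound_power_level(spl_db, surface_area_m2, s0=1.0):
    """Estimate the sound power level L_W (dB re 1 pW) from sound pressure
    levels measured on an enveloping surface, using the basic relation
        L_W = <L_p> + 10*log10(S / S0),
    where <L_p> is the *energy* average of the microphone levels and S is
    the area of the measurement surface. Corrections K1, K2 are omitted.
    """
    spl = np.asarray(spl_db, dtype=float)
    lp_avg = 10.0 * np.log10(np.mean(10.0 ** (spl / 10.0)))  # energy average
    return lp_avg + 10.0 * np.log10(surface_area_m2 / s0)

# Hypothetical readings from a 10-microphone hemispherical array (r = 2 m).
mic_levels = [78.1, 77.6, 79.0, 78.4, 77.9, 78.8, 78.2, 77.5, 78.9, 78.3]
hemisphere_area = 2.0 * np.pi * 2.0 ** 2
print(f"L_W = {sound_power_level(mic_levels, hemisphere_area):.1f} dB")
```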
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from the collected disease counts and the expected disease counts calculated by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method, which focuses on multiple testing control without abandoning the preliminary-study perspective that an analysis of SMR indicators is required to maintain. We implement control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-area issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a fully Bayesian hierarchical model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just by means of the single observation. This improves test power in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated for any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. The estimated FDR can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR does not exceed a pre-specified value; we call these estimated-FDR-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true, and the risk level in the remaining areas. In summarizing the simulation results we always consider FDR estimation in sets constituted by all b_i's lower than a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR we can check the sensitivity and specificity of the corresponding estimated-FDR-based decision rules. To investigate the over-smoothing level of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of estimated-FDR-based decision rules is generally low but their specificity is high; in these scenarios, selection rules based on an estimated FDR of 0.05 or 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on an estimated FDR of 0.15 gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a decision rule based on an estimated FDR of 0.05. In such scenarios, decision rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be recommended because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
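A minimal sketch of the estimated-FDR-based selection rule described above, assuming the posterior null probabilities b_i are already available from the MCMC output; the values below are invented for illustration.

```python
import numpy as np

def fdr_hat_selection(b, alpha=0.05):
    """Estimated-FDR-based selection rule.

    b: posterior probabilities of the null hypothesis (absence of risk),
    one value per area, e.g. MCMC estimates from the hierarchical model.
    Returns the indices of the largest set of areas that can be declared
    high-risk while keeping the estimated FDR (the mean of the selected
    b_i's) at or below alpha, together with that estimated FDR.
    """
    b = np.asarray(b, dtype=float)
    order = np.argsort(b)                                   # smallest b_i first
    running_fdr = np.cumsum(b[order]) / np.arange(1, b.size + 1)
    n_sel = int(np.sum(running_fdr <= alpha))               # largest admissible set
    fdr_est = running_fdr[n_sel - 1] if n_sel > 0 else None
    return order[:n_sel], fdr_est

# Invented posterior null probabilities for 8 areas.
b_i = [0.01, 0.03, 0.20, 0.04, 0.60, 0.02, 0.90, 0.35]
areas, fdr_est = fdr_hat_selection(b_i, alpha=0.05)
print("areas declared high-risk:", areas, "estimated FDR:", fdr_est)
```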
Abstract:
Survival during the early life stages of marine species, including nearshore temperate reef fishes, is typically very low, and small changes in mortality rates, due to physiological and environmental conditions, can have marked effects on the survival of a cohort and, on a larger scale, on the success of a recruitment season. Moreover, trade-offs between larval growth and the accumulation of energetic resources prior to settlement are likely to influence growth and survival through this critical period and afterwards. Rockfish recruitment rates are notoriously variable between years and across geographic locations. Rates of onshore delivery of pelagic juveniles (defined here as settlement) of two species of nearshore rockfishes, Sebastes caurinus and Sebastes carnatus, were monitored between 2003 and 2009 using artificial collectors placed at San Miguel and Santa Cruz Islands, off the Southern California coast. I investigated spatiotemporal variation in settlement rate, lipid content, pelagic larval duration and larval growth of the newly settled fishes; I assessed relationships between birth date, larval growth, early life-history characteristics and lipid content at settlement, also considering interspecific differences; finally, I attempted to relate interannual patterns of settlement and of early life-history traits to easily accessible local and regional indices of ocean conditions, including in situ ocean temperature, regional upwelling, sea surface temperature (SST) and chlorophyll-a (Chl-a) concentration. Spatial variation appeared to be of low relevance, while significant interannual differences were detected in settlement rate, pelagic larval duration and larval growth. The lipid content of the newly settled fishes was highly variable in space and time, but did not differ between the two species and did not show any relationship with early life-history traits, indicating either that no trade-off involved these physiological processes or that such trade-offs were masked by high individual variability in different periods of larval life. Significant interspecific differences were found in the timing of parturition and settlement and in larval growth rates, with S. carnatus growing faster and breeding and settling later than S. caurinus. The two species also exhibited different patterns of correlation between larval growth rates and larval duration. S. carnatus larval duration was longer when growth in the first two weeks post-hatch was faster, while S. caurinus had a shorter larval duration when it grew fast in the middle and at the end of larval life, suggesting different larval strategies. Fishes with longer larval durations were larger at settlement and exhibited a longer planktonic phase in periods of favourable environmental conditions. Ocean conditions had low explanatory power for interannual variation in early life-history traits, but very high explanatory power for settlement fluctuations, with regional upwelling strength being the principal indicator. Nonetheless, interannual variability in larval duration and growth was related to the major phenological changes in upwelling that occurred during the study period and that caused negative consequences at all trophic levels along the California coast.
Despite the low explanatory power of the environmental variables used in this study for the variation of larval biological traits, environmental processes were related differently to the early life-history characteristics of each species, indicating possible species-specific susceptibility to ocean conditions and local environmental adaptation, which should be further investigated. These results have implications for understanding the processes influencing larval and juvenile survival, and consequently recruitment variability, which may depend on both biological characteristics and environmental conditions.
Abstract:
Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology will integrate clouds of portable clients and embedded devices exchanging information, through the internet layer, with processing clusters of servers, data-centers and high-performance computing systems. Even though society as a whole is waiting to embrace this revolution, there is a flip side to the story. Portable devices require batteries to work away from power outlets, and battery storage capacity does not scale as the increasing power requirements do. At the other end, processing clusters such as data-centers and server farms are built upon the integration of thousands of multiprocessors. For each of them, technology scaling over the last decade has produced a dramatic increase in power density with significant spatial and temporal variability. This leads to power and temperature hot-spots, which may cause non-uniform ageing and accelerated chip failure. Furthermore, all the heat removed from the silicon translates into high cooling costs. Moreover, trends in the ICT carbon footprint show that the run-time power consumption of the whole spectrum of devices accounts for a significant slice of entire world carbon emissions. This thesis addresses the full ICT ecosystem and its dynamic power consumption concerns by describing a set of new and promising system-level resource management techniques to reduce power consumption and related issues in two corner cases: mobile devices and high-performance computing.
Abstract:
The objective of this thesis is the power transient analysis of experimental devices placed within the reflector of the Jules Horowitz Reactor (JHR). The JHR material testing facility is designed to achieve a core thermal power of 100 MW, and its large reflector hosts fissile material samples that are irradiated up to a total power of 3 MW. MADISON devices are expected to attain 130 kW, whereas the nominal power of ADELINE is about 60 kW. In addition, MOLFI test samples are expected to reach 360 kW in the LEU configuration and up to 650 kW in the HEU one. Safety issues concern shutdown transients and require particular verification of the thermal power decrease of these fissile samples with respect to core kinetics, including the determination of the reactivity of each device. A calculation model is conceived and applied in order to properly account for the different nuclear heating processes and the time-dependent features of the device transients. An innovative methodology is developed in which the flux-shape modification during control-rod insertions is investigated for its impact on device power through core-reflector coupling coefficients, thereby improving on previous methods that considered only nominal core-reflector parameters. Moreover, the effect of delayed emissions is evaluated with respect to the spatial impact on the devices of a diffuse in-core delayed neutron source. Delayed gamma transport related to fission-product concentrations is taken into account through depletion calculations of different fuel compositions over an equilibrium cycle. Given accurate control of device reactivity, power transients are then computed for every sample according to the envisaged shutdown procedures. The results obtained in this study are aimed at design feedback and reactor management optimization by the JHR project team. Moreover, the Safety Report is intended to use the present analysis for improved device characterization.
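As a purely illustrative sketch of the core-reflector coupling idea, device power can be tracked as a time-varying coupling coefficient times core power; the sketch below uses a one-delayed-group point-kinetics approximation, ignores decay heat and delayed gammas (which the thesis treats explicitly), and none of the numbers are JHR data.

```python
import numpy as np

# Device power modelled as P_dev(t) = c(t) * P_core(t), with a coupling
# coefficient c(t) that drifts as control-rod insertion modifies the flux
# shape, instead of staying at its nominal value. All numbers illustrative.
BETA, LAM = 0.0065, 0.08      # delayed-neutron fraction, precursor decay (1/s)
RHO = -0.05                   # step reactivity inserted at shutdown (negative)

def core_power(t, p0=100.0e6):
    """Core power after a negative reactivity step (prompt jump + stable period)."""
    prompt_drop = BETA / (BETA - RHO)
    omega = RHO * LAM / (BETA - RHO)          # stable inhour root (< 0)
    return p0 * prompt_drop * np.exp(omega * t)

def coupling(t, c_nom=1.3e-3, c_rods_in=1.0e-3, tau=5.0):
    """Core-to-device coupling coefficient drifting with rod insertion."""
    return c_rods_in + (c_nom - c_rods_in) * np.exp(-t / tau)

for ti in np.linspace(0.0, 60.0, 7):
    p_dev = coupling(ti) * core_power(ti)
    print(f"t = {ti:5.1f} s   P_dev = {p_dev / 1e3:8.2f} kW")
```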
Abstract:
Alveolar echinococcosis (AE)--caused by the cestode Echinococcus multilocularis--is a severe zoonotic disease found in temperate and arctic regions of the northern hemisphere. Even though the transmission patterns observed in different geographical areas are heterogeneous, the nuclear and mitochondrial targets usually used for the genotyping of E. multilocularis have shown only a marked genetic homogeneity in this species. We used microsatellite sequences, because of their high typing resolution, to explore the genetic diversity of E. multilocularis. Four microsatellite targets (EmsJ, EmsK, and EmsB, which were designed in our laboratory, and NAK1, selected from the literature) were tested on a panel of 76 E. multilocularis samples (larval and adult stages) obtained from Alaska, Canada, Europe, and Asia. Genetic diversity for each target was assessed by size polymorphism analysis. With the EmsJ and EmsK targets, two alleles were found for each locus, yielding two and three genotypes, respectively, discriminating European isolates from the other groups. With NAK1, five alleles were found, yielding seven genotypes, including those specific to Tibetan and Alaskan isolates. The EmsB target, a tandem repeated multilocus microsatellite, found 17 alleles showing a complex pattern. Hierarchical clustering analyses were performed with the EmsB findings, and 29 genotypes were identified. Due to its higher genetic polymorphism, EmsB exhibited a higher discriminatory power than the other targets. The complex EmsB pattern was able to discriminate isolates on a regional and sectoral level, while avoiding overdistinction. EmsB will be used to assess the putative emergence of E. multilocularis in Europe.
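A hedged sketch of the kind of hierarchical clustering analysis mentioned for EmsB, applied to hypothetical allele-size profiles; the data, the Euclidean metric and the average-linkage choice are all assumptions for illustration, not the study's protocol.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical EmsB-style profiles: each isolate is described by the relative
# contribution of each allele size (columns) to its amplification profile.
# Values are invented; real profiles come from fragment-size analysis of the
# tandem-repeated multilocus marker.
profiles = np.array([
    [0.40, 0.30, 0.20, 0.10, 0.00],   # isolate A
    [0.38, 0.32, 0.18, 0.12, 0.00],   # isolate B (close to A)
    [0.05, 0.10, 0.25, 0.35, 0.25],   # isolate C
    [0.06, 0.12, 0.22, 0.34, 0.26],   # isolate D (close to C)
])

# Pairwise distances between profiles, then average-linkage clustering,
# mirroring the hierarchical clustering analysis mentioned in the abstract.
dist = pdist(profiles, metric="euclidean")
tree = linkage(dist, method="average")
print(fcluster(tree, t=2, criterion="maxclust"))   # e.g. -> [1 1 2 2]
```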
Abstract:
Geospatial information systems are used to analyze spatial data to provide decision makers with relevant, up-to-date information. The processing time required for this information is a critical component of response time. Despite advances in algorithms and processing power, we still have many “human-in-the-loop” factors. Given the limited number of geospatial professionals, it is very important that analysts use their time effectively. Automating common tasks and providing faster human-computer interactions that do not disrupt workflow or attention are therefore highly desirable. The following research describes a novel approach to increasing productivity with a wireless, wearable electroencephalograph (EEG) headset within the geospatial workflow.
Abstract:
Free-space optical (FSO) communication links can experience extreme signal degradation due to atmospheric-turbulence-induced spatial and temporal irradiance fluctuations (scintillation) in the laser wavefront. In addition, turbulence can cause the laser beam centroid to wander, resulting in power fading and sometimes complete loss of the signal. Spreading of the laser beam and jitter are also artifacts of atmospheric turbulence. To accurately predict the signal fading that occurs in a laser communication system, and to get a true picture of how it affects crucial performance parameters like the bit error rate (BER), it is important to analyze the probability density function (PDF) of the integrated irradiance fluctuations at the receiver. In addition, it is desirable to find a theoretical distribution that accurately models these fluctuations under all propagation conditions. The PDF of integrated irradiance fluctuations is calculated from numerical wave-optics simulations of a laser beam after propagation through atmospheric turbulence, to investigate the evolution of the distribution as the aperture diameter is increased. The simulation data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes, from weak to very strong. Our results show that the gamma-gamma PDF provides a good fit to the simulated data distribution for all aperture sizes studied, from weak through moderate scintillation. In strong scintillation, the gamma-gamma PDF is a better fit to the distribution for point-like apertures, and the lognormal PDF is a better fit for apertures the size of the atmospheric spatial coherence radius ρ0 or larger. In addition, the PDF of the received power from a Gaussian laser beam that has been adaptively compensated at the transmitter before propagation to the receiver of an FSO link in the moderate-scintillation regime is investigated. The complexity of the adaptive optics (AO) system is increased in order to investigate the changes in the distribution of the received power and how this affects the BER. For the 10 km link, the optimal beam to transmit is unknown due to the non-reciprocal nature of the propagation path. These results show that, for non-reciprocal paths, a low order of AO complexity provides a better estimate of the optimal beam to transmit than a higher order. For the 20 km link distance it was found that, although the gain was minimal, all AO complexity levels provided an equivalent improvement in BER, and that no AO complexity level provided the correction needed for the optimal beam to transmit. Finally, the temporal power spectral density of the received power from an FSO communication link is investigated. Simulated and experimental results for the coherence time calculated from the temporal correlation function are presented. Results for both simulation and experimental data show that the coherence time increases as the receiving aperture diameter increases. For finite apertures, the coherence time increases as the communication link distance is increased. We conjecture that this is due to the increasing speckle size within the pupil plane of the receiving aperture at longer link distances.
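For reference, the two candidate irradiance models compared in this work can be written down directly; a minimal sketch of the gamma-gamma and unit-mean lognormal PDFs follows, with illustrative parameters rather than values fitted to any simulation run.

```python
import numpy as np
from scipy.special import gamma as G, kv

def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-gamma PDF of normalized irradiance I (unit mean), a standard
    scintillation model; alpha and beta parameterize the large- and
    small-scale irradiance fluctuations."""
    I = np.asarray(I, dtype=float)
    coef = 2.0 * (alpha * beta) ** ((alpha + beta) / 2.0) / (G(alpha) * G(beta))
    return (coef * I ** ((alpha + beta) / 2.0 - 1.0)
            * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I)))

def lognormal_pdf(I, sigma2):
    """Lognormal PDF of normalized irradiance with log-irradiance variance
    sigma2; the mean irradiance is forced to 1 via mu = -sigma2/2."""
    I = np.asarray(I, dtype=float)
    return np.exp(-(np.log(I) + sigma2 / 2.0) ** 2 / (2.0 * sigma2)) / (
        I * np.sqrt(2.0 * np.pi * sigma2))

# Illustrative parameters, not fitted to any particular simulation run.
I = np.linspace(0.05, 3.0, 5)
print(gamma_gamma_pdf(I, alpha=4.0, beta=2.0))
print(lognormal_pdf(I, sigma2=0.5))
```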
Abstract:
The present chapter gives a comprehensive introduction to the display and quantitative characterization of scalp field data. After introducing the construction of scalp field maps, different interpolation methods, the effect of the recording reference, and the computation of spatial derivatives are discussed. The arguments raised in this first part have important implications for resolving a potential ambiguity in the interpretation of differences of scalp field data. In the second part of the chapter, different approaches for comparing scalp field data are described. All of these comparisons can be interpreted unambiguously in terms of differences of intracerebral sources, either in strength or in location and orientation. In the present chapter we refer only to scalp field potentials, but mapping can also be used to display other features, such as power or statistical values; however, the rules for comparing and interpreting scalp field potentials might not apply to such data. Generic form of scalp field data: Electroencephalogram (EEG) and event-related potential (ERP) recordings consist of one value for each sample in time and for each electrode. The recorded EEG and ERP data thus represent a two-dimensional array, with one dimension corresponding to the variable “time” and the other dimension corresponding to the variable “space”, or electrode. Table 2.1 shows ERP measurements over a brief time period. The ERP data (averaged over a group of healthy subjects) were recorded with 19 electrodes during a visual paradigm. The parietal midline Pz electrode was used as the reference electrode.
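A minimal sketch of two operations discussed in this chapter: re-referencing a (time × electrode) ERP array to the average reference, and computing global field power as a reference-independent summary. The random data below merely stand in for real recordings.

```python
import numpy as np

# Hypothetical ERP array: 200 time samples x 19 electrodes (random stand-in).
rng = np.random.default_rng(0)
erp = rng.normal(size=(200, 19))

# Re-referencing to the average reference: subtract, at each time point,
# the mean over electrodes. Differences between maps are unaffected by the
# choice of reference up to the same constant per time point.
erp_avg_ref = erp - erp.mean(axis=1, keepdims=True)

# Global field power (GFP): the spatial standard deviation of the map at
# each time point, a reference-independent measure of response strength.
gfp = erp_avg_ref.std(axis=1)
print(gfp[:5])
```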
Abstract:
Regional and rural development policies in Europe increasingly emphasize entrepreneurship to mobilize the endogenous economic potential of rural territories. This study develops a concept to quantify entrepreneurship as a place-dependent local potential and examines its impact on the local economic performance of rural territories in Switzerland. The short-to-medium-term impact of entrepreneurship on the economic performance of 1706 rural municipalities in Switzerland is assessed by applying three spatial random effects models. Results suggest a generally positive relationship between entrepreneurship and local development: rural municipalities with higher entrepreneurial potential generally show higher business tax revenues per capita and a lower share of social welfare cases among the population, although the impact on local employment is less clear. The explanatory power of entrepreneurship in all three models, however, was only moderate. This finding suggests that political expectations of fostering entrepreneurship to boost endogenous rural development in the short-to-medium term should be dampened.
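The abstract does not spell out the model specification; as a hedged illustration only, a municipality-level spatial random effects regression of the kind described might take a proper conditional autoregressive (CAR) form:

```latex
y_i = \beta_0 + \beta_1 E_i + \mathbf{x}_i^{\top}\boldsymbol{\gamma} + u_i + \varepsilon_i,
\qquad
\mathbf{u} \sim \mathcal{N}\!\left(\mathbf{0},\; \sigma_u^{2}\,(\mathbf{D} - \rho\,\mathbf{W})^{-1}\right),
\qquad
\varepsilon_i \sim \mathcal{N}(0, \sigma^{2}),
```

where y_i is one of the three performance outcomes for municipality i (business tax revenue per capita, share of welfare cases, employment), E_i its entrepreneurial potential, x_i control variables, W a spatial weights (contiguity) matrix and D its diagonal row-sum matrix. This is one common specification, not necessarily the authors'.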
Abstract:
Bone-anchored hearing implants (BAHI) are routinely used to alleviate the effects of the acoustic head shadow in single-sided sensorineural deafness (SSD). In this study, the influence of the directional microphone setting and of the maximum power output of the BAHI sound processor on speech understanding in noise was investigated in a laboratory setting. Eight adult BAHI users with SSD participated in this pilot study. Speech understanding in noise was measured using a new Slovak speech-in-noise test in two different spatial settings, either with speech coming from the side of the BAHI and noise from the front (S90N0), or vice versa (S0N90). In both spatial settings, speech understanding was measured without a BAHI, with a Baha BP100 in omnidirectional mode, with a BP100 in directional mode, with a BP110 Power in omnidirectional mode, and with a BP110 Power in directional mode. In spatial setting S90N0, speech understanding in noise with either sound processor and in either directional mode was improved by 2.2-2.8 dB (p = 0.004-0.016). In spatial setting S0N90, speech understanding in noise was reduced by either BAHI, but was significantly better, by 1.0-1.8 dB, if the directional microphone system was activated (p = 0.046) compared with the omnidirectional setting. With the limited number of subjects in this study, no statistically significant differences were found between the two sound processors.
Abstract:
We invoke the ideal of tolerance in response to conflict, but what does it mean to answer conflict with a call for tolerance? Is tolerance a way of resolving conflicts or a means of sustaining them? Does it transform conflicts into productive tensions, or does it perpetuate underlying power relations? To what extent does tolerance hide its involvement with power and act as a form of depoliticization? Wendy Brown and Rainer Forst debate the uses and misuses of tolerance, an exchange that highlights the fundamental differences in their critical practice despite a number of political similarities. Both scholars address the normative premises, limits, and political implications of various conceptions of tolerance. Brown offers a genealogical critique of contemporary discourses on tolerance in Western liberal societies, focusing on their inherent ties to colonialism and imperialism, and Forst reconstructs an intellectual history of tolerance that attempts to redeem its political virtue in democratic societies. Brown and Forst work from different perspectives and traditions, yet they each remain wary of the subjection and abnegation embodied in toleration discourses, among other issues. The result is a dialogue rich in critical and conceptual reflections on power, justice, discourse, rationality, and identity.
Abstract:
One of the current challenges in evolutionary ecology is understanding the long-term persistence of contemporarily evolving predator–prey interactions across space and time. To address this, we developed an extension of a multi-locus, multi-trait eco-evolutionary individual-based model that incorporates several interacting species in explicit landscapes. We simulated the eco-evolutionary dynamics of multi-species food webs with different degrees of connectance across soil-moisture islands. A broad set of parameter combinations led to the local extinction of species, but some species persisted, and persistence was associated with (1) high connectance and omnivory and (2) ongoing evolution, due to the multi-trait genetic variability of the embedded species. Furthermore, persistence was highest at intermediate island distances, likely because of a balance between predation-induced extinction (strongest at short island distances) and the coupling of island diversity by top predators, which, by travelling among islands, exert global top-down control of biodiversity. In the simulations with high genetic variation, we also found widespread trait evolutionary changes indicative of eco-evolutionary dynamics. We discuss how ever-increasing computing power and high-resolution data availability will soon allow researchers to start bridging the in vivo–in silico gap.
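As a toy illustration only, the ingredients named above (individuals with heritable traits, trait-matched predation, reproduction with mutation, and migration coupling islands) can be sketched in a few dozen lines; this is far simpler than the authors' multi-locus, multi-trait food-web model, and every rule and number is invented.

```python
import random

random.seed(1)
N_ISLANDS, STEPS, MIGRATE = 3, 50, 0.02

def mutate(trait):
    return trait + random.gauss(0.0, 0.05)     # heritable trait, small mutation

prey = [[random.gauss(0.0, 0.2) for _ in range(40)] for _ in range(N_ISLANDS)]
pred = [[random.gauss(0.0, 0.2) for _ in range(6)] for _ in range(N_ISLANDS)]

for _ in range(STEPS):
    for i in range(N_ISLANDS):
        # Predation succeeds when predator and prey traits match closely.
        survivors, eaten = [], 0
        for x in prey[i]:
            if any(abs(x - p) < 0.15 for p in pred[i]) and random.random() < 0.4:
                eaten += 1
            else:
                survivors.append(x)
        # Prey reproduce with mutation under a soft carrying capacity.
        prey[i] = (survivors + [mutate(x) for x in survivors
                                if random.random() < 0.35])[:80]
        # Predator births scale with food intake; constant background mortality.
        pred[i] = [p for p in pred[i] if random.random() > 0.15]
        pred[i] += [mutate(random.choice(pred[i]))
                    for _ in range(eaten // 5) if pred[i]]
    # Migration couples the islands, as dispersal does in the full model.
    for pop in (prey, pred):
        moving = [[] for _ in range(N_ISLANDS)]
        for i in range(N_ISLANDS):
            staying = []
            for x in pop[i]:
                (moving[(i + 1) % N_ISLANDS] if random.random() < MIGRATE
                 else staying).append(x)
            pop[i] = staying
        for i in range(N_ISLANDS):
            pop[i] += moving[i]

print("prey per island:", [len(p) for p in prey])
print("predators per island:", [len(p) for p in pred])
```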