837 results for Entropy of a sampling design
Abstract:
The SD card (Secure Digital Memory Card) is a widely used portable storage medium. Current research on SD cards focuses mainly on SD card controllers based on FPGAs (Field Programmable Gate Arrays). Most of these rely on an API (Application Programming Interface), the AHB bus (Advanced High-performance Bus), and similar interfaces, and are dedicated to achieving ultra-high-speed communication between the SD card and upper systems. Research on SD card controllers plays a vital role in high-speed cameras and other specialized application areas. The FPGA-based file system and SD2.0 IP (Intellectual Property core) presented here not only exhibits a good transmission rate, but also achieves systematic management of files, while retaining strong portability and practicality. The design and implementation of the file system on an SD card covers three main IP innovation points. First, the combination and integration of the file system and the SD card controller makes the overall system highly integrated and practical. The popular SD2.0 protocol is implemented for the communication channels. A pure digital logic design based on VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) integrates the SD card controller in the hardware layer and the FAT32 file system for the entire system. Second, the file management mechanism makes document processing more convenient, especially for batch processing of small files: it relieves the upper system from frequently accessing and processing them, thereby enhancing overall system efficiency. Finally, the digital design ensures superior performance. For transmission security, a CRC (Cyclic Redundancy Check) algorithm protects data transmission. The design of each module is independent of platform macro cells and retains good portability, and custom integrated instructions and interfaces make the system easy to use. The design was tested on multiple platforms (Xilinx and Altera FPGA development platforms), covering timing simulation and debugging of each module. Test results show that the designed FPGA-based file system IP supports SD, TF and Micro SD cards using the 2.0 protocol, implements systematic management of stored files, and supports SD bus mode. Data read and write rates on a Kingston Class 10 card are approximately 24.27 MB/s and 16.94 MB/s, respectively.
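As a point of reference for the CRC protection mentioned above (my own illustration, not code from the paper): the SD specification protects command frames with a 7-bit CRC (polynomial x^7 + x^3 + 1) and data blocks with CRC-16-CCITT (polynomial x^16 + x^12 + x^5 + 1). A bit-serial software model such as the sketch below mirrors what a VHDL shift-register implementation computes.

```python
# Bit-serial CRC models of the checks used on the SD bus (illustrative sketch).

def crc7(data: bytes) -> int:
    """CRC-7 over a byte sequence, polynomial 0x09 (x^7 + x^3 + 1), init 0."""
    crc = 0
    for byte in data:
        for i in range(7, -1, -1):
            fb = ((crc >> 6) & 1) ^ ((byte >> i) & 1)  # feedback = register MSB xor input bit
            crc = (crc << 1) & 0x7F
            if fb:
                crc ^= 0x09
    return crc

def crc16_ccitt(data: bytes) -> int:
    """CRC-16-CCITT (XMODEM variant: poly 0x1021, init 0), as used for SD data blocks."""
    crc = 0
    for byte in data:
        for i in range(7, -1, -1):
            fb = ((crc >> 15) & 1) ^ ((byte >> i) & 1)
            crc = (crc << 1) & 0xFFFF
            if fb:
                crc ^= 0x1021
    return crc

if __name__ == "__main__":
    frame = bytes([0x40, 0x00, 0x00, 0x00, 0x00])            # CMD0 with a zero argument
    print(hex(crc7(frame)), hex((crc7(frame) << 1) | 0x01))  # 0x4a, then 0x95 once the end bit is appended
```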
Abstract:
This paper describes a methodological proposal for the design, creation and evaluation of Learning Objects (LOs). The study arises from the compilation and analysis of several LO design methodologies currently used in Ibero-America. The proposal, named DICREVOA, defines five phases: analysis, design (instructional and multimedia), implementation (LO and metadata), evaluation (from the perspective of both the producer and the consumer of the LO), and publishing. The methodology focuses not only on those inexperienced in teaching, but also on those having a basic understanding of the technological and educational aspects of LO design; accordingly, the study emphasizes LO design activities centered around the Kolb cycle and the use of the ExeLearning tool to implement the LO core. Additionally, DICREVOA was applied in a case study, which demonstrates how it provides a feasible mechanism for LO design and implementation within different contexts. Finally, DICREVOA, the case study to which it was applied, and the results obtained are presented.
Abstract:
The scleractinian coral Lophelia pertusa has been a focus of deep-sea research since the recognition of the vast extent of coral reefs in North Atlantic waters two decades ago, long after their existence was first reported by fishermen. These reefs were shown to provide habitat, concentrate biomass and act as feeding or nursery grounds for many species, including those targeted by commercial fisheries. The attention given to this cold-water coral (CWC) species by researchers and the wider public has therefore increased, and new research programs were triggered to determine the full extent of the corals' geographic distribution and the ecological dynamics of “Lophelia reefs”. The present study is based on a systematic, standardised sampling design to analyse the distribution and coverage of CWC reefs along European margins from the Bay of Biscay to Iceland. Based on Remotely Operated Vehicle (ROV) image analysis, we report an almost systematic occurrence of Madrepora oculata in association with L. pertusa, with similar abundances of both species within the explored reefs, despite a tendency towards increased abundance of L. pertusa relative to M. oculata at higher latitudes. This systematic association occasionally reached the colony scale, with “twin” colonies of both species often observed growing next to each other where isolated structures occurred off-reef. Finally, several “false chimaeras” were observed within reefs, confirming that colonial structures can be “coral bushes” formed by an accumulation of multiple colonies even at the inter-specific scale, with no need for self-recognition mechanisms. We thus underline the importance of the hitherto underexplored M. oculata in the Eastern Atlantic, re-establishing a more balanced view in which both species and their as yet unknown interactions are required to better elucidate the ecology, dynamics and fate of European CWC reefs in a changing environment.
Abstract:
This paper, based on the outcome of discussions at a NORMAN Network-supported workshop in Lyon (France) in November 2014, aims to provide a common position of passive sampling community experts regarding the concrete actions required to foster the use of passive sampling techniques in support of contaminant risk assessment and management, and for routine monitoring of contaminants in aquatic systems. The brief roadmap presented here focuses on the identification of robust passive sampling methodology, on technology that requires further development or has yet to be developed, on our current knowledge of the evaluation of uncertainties when calculating a freely dissolved concentration, and on the relationship between passive sampling data and those obtained through biomonitoring. A tiered approach to identifying areas of potential environmental quality standard (EQS) exceedances is also shown. Finally, we propose a list of recommended actions to improve the acceptance of passive sampling by policy-makers. These include drafting guidelines and quality assurance and control procedures, developing demonstration projects in which biomonitoring and passive sampling are undertaken alongside each other, organising proficiency testing schemes and interlaboratory comparisons and, finally, establishing passive sampler-based assessment criteria in relation to existing EQS.
Abstract:
Since 2005, harmonized catch assessment surveys (CASs) have been implemented on Lake Victoria in the three riparian countries, Uganda, Kenya and Tanzania, to monitor the commercial fish stocks and provide management advice. The regionally harmonized standard operating procedures (SOPs) for CASs have not been followed in full owing to logistical difficulties, yet the new approaches adopted have not been documented. This study investigated the alternative approaches used to estimate fish catches on the lake, with the aim of determining the most reliable one for providing management advice, and also the effect of the current sampling routine on the precision of the catch estimates provided. The study found the currently used lake-wide approach less reliable and more biased in providing catch estimates than the district-based approach. Noticeable differences were detected in catch estimates between different months of the year. The study recommends that future analyses of CAS data collected on the lake follow the district-based approach. Future CASs should also account for seasonal variation in the sampling design by providing for replication of sampling. The SOPs need updating to document the procedures that deviate from the original sampling design.
Abstract:
Assessing patterns of connectivity at the community and population levels is relevant to marine resource management and conservation. The present study reviews this issue with a focus on the western Indian Ocean (WIO) biogeographic province. This part of the Indian Ocean holds more species than expected from current models of global reef fish species richness. In this study, checklists of reef fish species were examined to determine levels of endemism in each of 10 biogeographic provinces of the Indian Ocean. Results showed that the number of endemic species was higher in the WIO than in any other region of the Indian Ocean. Endemic species from the WIO on average had a larger body size than elsewhere in the tropical Indian Ocean, suggesting an effect of peripheral speciation, as previously documented for the Hawaiian reef fish fauna relative to other sites in the tropical western Pacific. To explore the evolutionary dynamics of species across biogeographic provinces and infer mechanisms of speciation, we present and compare the results of phylogeographic surveys based on compilations of published and unpublished mitochondrial DNA sequences for 19 Indo-Pacific reef-associated fishes (rainbow grouper Cephalopholis argus, scrawled butterflyfish Chaetodon meyeri, bluespot mullet Crenimugil sp. A, humbug damselfish Dascyllus abudafur/Dascyllus aruanus, areolate grouper Epinephelus areolatus, blacktip grouper Epinephelus fasciatus, honeycomb grouper Epinephelus merra, bluespotted cornetfish Fistularia commersonii, cleaner wrasse Labroides sp. 1, longface emperor Lethrinus sp. A, bluestripe snapper Lutjanus kasmira, unicornfishes Naso brevirostris, Naso unicornis and Naso vlamingii, blue-spotted maskray Neotrygon kuhlii, largescale mullet Planiliza macrolepis, common parrotfish Scarus psittacus, crescent grunter Terapon jarbua, whitetip reef shark Triaenodon obesus) and three coastal Indo-West Pacific invertebrates (blue seastar Linckia laevigata, spiny lobster Panulirus homarus, small giant clam Tridacna maxima). A heterogeneous and often unbalanced sampling design, the paucity of data in a number of cases, and among-species discrepancies in phylogeographic structure precluded any generalization regarding phylogeographic patterns. Nevertheless, the WIO may have been a source of haplotypes in some cases, and it also harboured an endemic clade in at least one case. The survey also highlighted likely cryptic species. This may eventually affect the accuracy of the current species checklists, which form the basis of some of the recent advances in Indo-West Pacific marine ecology and biogeography.
Abstract:
Salinity gradient power (SGP) is the energy that can be obtained from the mixing entropy of two solutions with different salt concentrations. River estuaries, where salt water and fresh water mix, hold a large potential for this renewable energy. In this study, this potential in the estuaries of rivers flowing into the Persian Gulf, and the factors affecting it, are analysed and assessed. Since most large rivers are in Asia, this continent, with a potential power of 338 GW, is the second major source of salinity gradient power in the world (Wetsus Institute, 2009). The Persian Gulf, with suitable salinity gradients in its river estuaries, is of particular importance for the extraction of this energy. Considering the total river flow into the Persian Gulf, approximately 3486 m3/s, the theoretically extractable power from the salinity gradient in this region is 5.2 GW. Iran, with its numerous rivers along the coast of the Persian Gulf, has a large share of this energy source. For example, calculations on data from three hydrometry stations located on the Arvand River show that Khorramshahr Station, releasing 1.91 M/ of energy obtained by combining 1.26 m3 of river water with 0.74 m3 of sea water, yields the maximum amount of extractable energy. Considering the average annual discharge of the Arvand River at the Khorramshahr hydrometry station, the theoretically extractable power is 955 MW. Other parameters studied in this research are the intrusion length of salt water and its flushing time in the estuary, both of which have a significant influence on the salinity gradient power. According to calculations under HWS conditions and average river discharge, the maximum salinity intrusion length into a river estuary, 41 km, is found for the Arvand River and the lowest, 8 km, for the Helle River. Likewise, the longest salt-water flushing time in the estuary, 9.8 days, belongs to the Arvand River and the shortest, 3.3 days, to the Helle River. The influence of these two parameters in reducing the amount of extractable energy can also be seen in the studied estuaries; for example, at the estuary of the Arvand River, over an interval of 8.9 days, the salinity gradient power decreases by 9.2%. Another part of this research focuses on the design of a suitable system for extracting electrical energy from the salinity gradient. So far, five methods have been proposed to convert this energy into electricity; among them, reverse electrodialysis (RED) and pressure-retarded osmosis (PRO) are of special practical importance. In theory, both techniques generate the same amount of energy from given volumes of sea and river water of specified salinity; in practice, the RED technique appears more attractive for power generation using sea water and river water, because it requires a smaller salinity gradient than the PRO method. In addition, the RED method does not need a turbine to convert the energy, and electricity generation starts as soon as the two solutions are mixed. In this research, the power density and the efficiency of the generated energy were assessed by designing a physical model.
The physical model designed is a unicellular reverse electrodialysis cell with nano-heterogeneous membranes of 20 cm x 20 cm, which produced a power density of 0.58 W/m2 using river water (1 g NaCl/l) and sea water (30 g NaCl/l) under laboratory conditions. This value was obtained thanks to the nano treatment applied to the membranes and the suitable design of the cell, which increased the system efficiency by 11% compared with non-nano membranes.
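A minimal sketch of the underlying thermodynamics (my own illustration, not the authors' model): for ideal solutions the theoretical work available from mixing river and sea water equals the drop in Gibbs free energy, which comes entirely from the entropy of mixing, G_mix = R*T*sum_i n_i*ln(x_i) summed over the species present (here Na+, Cl- and H2O). The volumes, salinities, temperature and NaCl-only chemistry below are assumptions chosen to echo the numbers quoted in the abstract, not values taken from the paper.

```python
import math

R = 8.314                            # J/(mol*K), gas constant
T = 298.15                           # K, assumed temperature
M_NACL = 58.44                       # g/mol
MOL_WATER_PER_L = 1000.0 / 18.015    # ~55.5 mol of water per litre (dilute approximation)

def ideal_mixing_G(volume_m3: float, salt_g_per_l: float) -> float:
    """R*T*sum(n_i*ln(x_i)) for a NaCl solution treated as an ideal mixture (J)."""
    litres = volume_m3 * 1000.0
    n_salt = salt_g_per_l * litres / M_NACL
    n_water = MOL_WATER_PER_L * litres
    n_total = 2.0 * n_salt + n_water          # full dissociation into Na+ and Cl-
    x_ion = n_salt / n_total                  # mole fraction of each ion species
    x_water = n_water / n_total
    return R * T * (2.0 * n_salt * math.log(x_ion) + n_water * math.log(x_water))

def mixing_work(v_river, c_river, v_sea, c_sea):
    """Maximum extractable work (J) when the two streams are mixed completely."""
    v_mix = v_river + v_sea
    c_mix = (v_river * c_river + v_sea * c_sea) / v_mix   # mass balance on the salt
    g_before = ideal_mixing_G(v_river, c_river) + ideal_mixing_G(v_sea, c_sea)
    g_after = ideal_mixing_G(v_mix, c_mix)
    return g_before - g_after                 # positive: G decreases on mixing

# Example with the volume split quoted in the abstract (1.26 m3 river + 0.74 m3 sea)
# and illustrative salinities of 1 and 30 g NaCl/l:
print(f"{mixing_work(1.26, 1.0, 0.74, 30.0) / 1e6:.2f} MJ")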
Abstract:
Background and Purpose: At least part of the failure in the transition from experimental to clinical studies in stroke has been attributed to the imprecision introduced by problems in the design of experimental stroke studies. Using a metaepidemiologic approach, we addressed the effect of randomization, blinding, and use of comorbid animals on the estimate of how effectively therapeutic interventions reduce infarct size. Methods: Electronic and manual searches were performed to identify meta-analyses that described interventions in experimental stroke. For each meta-analysis thus identified, a reanalysis was conducted to estimate the impact of various quality items on the estimate of efficacy, and these estimates were combined in a meta meta-analysis to obtain a summary measure of the impact of the various design characteristics. Results: Thirteen meta-analyses that described outcomes in 15 635 animals were included. Studies that included unblinded induction of ischemia reported effect sizes 13.1% (95% CI, 26.4% to 0.2%) greater than studies that included blinding, and studies that included healthy animals instead of animals with comorbidities overstated the effect size by 11.5% (95% CI, 21.2% to 1.8%). No significant effect was found for randomization, blinded outcome assessment, or high aggregate CAMARADES quality score. Conclusions: We provide empirical evidence of bias in the design of studies, with studies that included unblinded induction of ischemia or healthy animals overestimating the effectiveness of the intervention. This bias could account for the failure in the transition from bench to bedside of stroke therapies.
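A generic illustration of the pooling step described above (not the authors' exact model, which may well have used random-effects weighting): inverse-variance weighting combines the per-meta-analysis estimates of how much a design feature shifts the reported effect size into one summary estimate with a confidence interval. All numbers below are made up.

```python
import math

def inverse_variance_pool(estimates, std_errors):
    """Fixed-effect pooled estimate and 95% CI from per-study estimates and standard errors."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical differences (in %) in effect size between unblinded and blinded studies,
# one value per re-analysed meta-analysis:
diffs = [18.0, 9.5, 12.0, 20.0, 7.0]
ses = [6.0, 4.5, 5.0, 8.0, 5.5]
print(inverse_variance_pool(diffs, ses))
```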
Abstract:
The performance, energy efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone, and must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by translating the routing problem into a third dimension, and facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously-integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies. Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impact on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e. power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for future improvement of this work.
Abstract:
Ergonomics is intrinsically connected to political debates about the good society, about how we should live. This article follows the ideas of Colin Ward by setting the practices of ergonomics and design along a spectrum between more libertarian approaches and more authoritarian. Within Anglo-American ergonomics, more authoritarian approaches tend to prevail, often against the wishes of designers who have had to fight with their employers for best possible design outcomes. The article draws on debates about the design and manufacturing of schoolchildren's furniture. Ergonomics would benefit from embracing these issues to stimulate a broader discourse amongst its practitioners about how to be open to new disciplines, particularly those in the social sciences.
Abstract:
The bubble crab Dotilla fenestrata forms very dense populations on the sand flats of the eastern coast of Inhaca Island, Mozambique, making it an interesting biological model with which to examine spatial distribution patterns and test the relative efficiency of common sampling methods. Given its apparent ecological importance within the sandy intertidal community, understanding the factors ruling the dynamics of Dotilla populations is also a key issue. In this study, different techniques for estimating crab density are described, and the trends in spatial distribution of the different population categories are shown. The studied populations are arranged in discrete patches located on the well-drained crests of nearly parallel mega sand ripples. For a given sample size, there was an obvious gain in precision from using a stratified random sampling technique, treating discrete patches as strata, compared to a simple random design. Mean density and variance differed considerably among patches, since juveniles and ovigerous females were clumped, with higher densities at the lower and upper shore levels, respectively. Burrow counting was found to be an adequate method for large-scale sampling, although it consistently underestimated actual crab density by nearly half. Regression analyses suggested that crabs smaller than 2.9 mm carapace width tend to go undetected in visual burrow counts. A visual survey of sampling plots over several patches of a large Dotilla population showed that crab density varied in an interesting oscillating pattern, apparently following the topography of the sand flat. Patches extending to the lower shore contained higher densities than those mostly covering the higher shore. Within-patch density variability also pointed to the same trend, but the density increment towards the lowest shore level varied greatly among the patches compared.
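An illustrative sketch of why stratifying by patch pays off (my own toy simulation, not the paper's data): when patch means differ strongly, treating each patch as a stratum and allocating quadrats per stratum removes the between-patch component from the estimator's variance, so the stratified density estimate is much more precise than simple random sampling of the same total number of quadrats. Patch densities and sample sizes below are made up for the demonstration.

```python
import random
random.seed(1)

patches = {                        # hypothetical mean crab density per quadrat, by patch
    "lower_shore": 120.0,
    "mid_shore": 60.0,
    "upper_shore": 15.0,
}
quadrats_per_patch = 2000          # hypothetical number of quadrats in each patch

def simulate_patch(mean, n):
    """Crude over-dispersed counts around the patch mean."""
    return [max(0.0, random.gauss(mean, 2 * mean ** 0.5)) for _ in range(n)]

population = {p: simulate_patch(m, quadrats_per_patch) for p, m in patches.items()}
all_quadrats = [c for counts in population.values() for c in counts]

def srs_estimate(n):
    """Simple random sample of n quadrats over the whole flat."""
    sample = random.sample(all_quadrats, n)
    return sum(sample) / n

def stratified_estimate(n_per_stratum):
    """Equal-weight stratified mean (every stratum holds the same number of quadrats)."""
    means = [sum(random.sample(counts, n_per_stratum)) / n_per_stratum
             for counts in population.values()]
    return sum(means) / len(means)

# Compare the spread of the two estimators over repeated surveys of 30 quadrats.
srs = [srs_estimate(30) for _ in range(500)]
strat = [stratified_estimate(10) for _ in range(500)]
var = lambda xs: sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / (len(xs) - 1)
print("SRS variance:       ", round(var(srs), 2))
print("Stratified variance:", round(var(strat), 2))
```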
Abstract:
Coprime and nested sampling are well-known deterministic sampling techniques that operate at rates significantly lower than the Nyquist rate, and yet allow perfect reconstruction of the spectra of wide-sense stationary (WSS) signals. However, theoretical guarantees for these samplers assume ideal conditions such as synchronous sampling and the ability to compute statistical expectations perfectly. This thesis studies the performance of coprime and nested samplers in the spatial and temporal domains when these assumptions are violated. In the spatial domain, the robustness of these samplers is studied by considering arrays with perturbed sensor locations (with unknown perturbations). Simplified expressions for the Fisher Information matrix of perturbed coprime and nested arrays are derived, which explicitly highlight the role of the co-array. It is shown that even in the presence of perturbations, it is possible to resolve $O(M^2)$ sources under appropriate conditions on the size of the grid. The assumption of small perturbations leads to a novel "bi-affine" model in terms of source powers and perturbations. The redundancies in the co-array are then exploited to eliminate the nuisance perturbation variable and reduce the bi-affine problem to a linear underdetermined (sparse) problem in the source powers. This thesis also studies the robustness of coprime sampling to a finite number of samples and to sampling jitter, by analyzing their effects on the quality of the estimated autocorrelation sequence. A variety of bounds on the error introduced by such non-ideal sampling schemes are computed by considering a statistical model for the perturbation. They indicate that coprime sampling leads to stable estimation of the autocorrelation sequence in the presence of small perturbations. Under appropriate assumptions on the distribution of the WSS signals, sharp bounds on the estimation error are established, which indicate that the error decays exponentially with the number of samples. The theoretical claims are supported by extensive numerical experiments.
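A rough sketch of the temporal coprime idea (my own toy example, not code from the thesis): keep only the samples x[M*n] and x[N*n] of a WSS process, with M and N coprime. Products of pairs drawn from the two streams still visit every lag from 0 to M*N, so the full autocorrelation sequence can be estimated from far fewer samples than Nyquist-rate sampling would use. All signal parameters below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 5                      # coprime sampler periods
L = 20_000                       # length of the underlying Nyquist-rate process

# WSS test signal: two random-phase sinusoids in white noise.
n = np.arange(L)
x = (np.cos(2 * np.pi * 0.11 * n + rng.uniform(0, 2 * np.pi))
     + np.cos(2 * np.pi * 0.23 * n + rng.uniform(0, 2 * np.pi))
     + 0.5 * rng.standard_normal(L))

# The two sub-Nyquist streams actually retained by the coprime sampler.
idx_m = np.arange(0, L, M)
idx_n = np.arange(0, L, N)

# Accumulate products x[M*a] * x[N*b] into bins indexed by the lag |M*a - N*b|.
lags = np.arange(M * N + 1)
acc = np.zeros(lags.size)
cnt = np.zeros(lags.size)
for a in idx_m:
    for b in idx_n[np.abs(idx_n - a) <= M * N]:   # only nearby pairs contribute small lags
        lag = abs(int(a) - int(b))
        acc[lag] += x[a] * x[b]
        cnt[lag] += 1
r_coprime = acc / np.maximum(cnt, 1)

# Reference: the usual estimate from all Nyquist-rate samples.
r_full = np.array([np.dot(x[: L - k], x[k:]) / L for k in lags])
print(np.round(r_coprime[:6], 2))
print(np.round(r_full[:6], 2))
```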
Design Optimization of Modern Machine-drive Systems for Maximum Fault Tolerant and Optimal Operation
Abstract:
Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, represent an indispensable part of high-power-density products. Such products include hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of these systems. The compatibility of the electric machine and its drive system for optimal cost and operation has been a major challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme that offers the best compromise between the reliability and optimality of the electric machine-drive system. The effort presented here is motivated by the need for new techniques that connect the design and control of electric machines and drive systems. A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environment. The modeling process was utilized in the design process in the form of a finite-element-based optimization process, as well as in a hardware-in-the-loop finite-element-based optimization process. It was later employed in the design of highly accurate and efficient physics-based customized observers, which are required for fault diagnosis as well as for sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques. The modeling process was also employed in the real-time demagnetization control of the machine, and various real-time scenarios were successfully verified. It was shown that this process offers the potential to optimally redefine the assumptions made in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions. The mathematical development and stability criteria of the physics-based modeling of the machine, the design optimization, the physics-based fault diagnosis, and the physics-based sensorless technique are described in detail. To investigate the performance of the developed design test-bed, software and hardware setups were constructed first, and several topologies of the permanent magnet machine were optimized inside the optimization test-bed. To investigate the performance of the developed sensorless control, a test-bed including a 0.25 kW surface-mounted permanent magnet synchronous machine was created; verification of the proposed technique in a range from medium to very low speed effectively shows the intelligent design capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test-bed including a 0.8 kW surface-mounted permanent magnet synchronous machine with trapezoidal back electromotive force was created. The results verify the proposed technique under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions, making the system an ideal candidate for propulsion systems.
Abstract:
In this thesis, we deal with the design of experiments in the drug development process, focusing on the design of clinical trials for treatment comparisons (Part I) and the design of preclinical laboratory experiments for protein development and manufacturing (Part II). In Part I we propose a multi-purpose design methodology for sequential clinical trials. We derive optimal allocations of patients to treatments for testing the efficacy of several experimental groups while also taking ethical considerations into account. We first consider exponential responses for survival trials and then present a unified framework for heteroscedastic experimental groups that encompasses the general ANOVA set-up. The very good performance of the suggested optimal allocations, in terms of both inferential and ethical characteristics, is illustrated analytically and through several numerical examples, including comparisons with other designs proposed in the literature. Part II concerns the planning of experiments for processes composed of multiple steps in the context of preclinical drug development and manufacturing. Following the Quality by Design paradigm, the objective of the multi-step design strategy is the definition of the manufacturing design space of the whole process; because we consider the interactions among the subsequent steps, our proposal ensures the quality and safety of the final product while enabling more flexibility and process robustness in manufacturing.
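A classical point of reference for allocation under heteroscedasticity (not the allocation rule derived in the thesis): Neyman-type allocation assigns patients in proportion to each group's response standard deviation, which minimises the variance of the estimated group means for a fixed total sample size. The thesis additionally weighs ethical criteria, which this toy function ignores; group names and numbers are hypothetical.

```python
def neyman_allocation(total_n: int, std_devs: list[float]) -> list[int]:
    """Split total_n patients across groups proportionally to their standard deviations."""
    weight = sum(std_devs)
    raw = [total_n * s / weight for s in std_devs]
    alloc = [int(r) for r in raw]
    # hand out leftover patients to the groups with the largest rounding loss
    for i in sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True):
        if sum(alloc) == total_n:
            break
        alloc[i] += 1
    return alloc

# e.g. three experimental arms plus a noisier control:
print(neyman_allocation(240, [1.0, 1.0, 1.5, 2.5]))   # -> [40, 40, 60, 100]
```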
Abstract:
The aim of this work is to present a general overview of the state of the art in design for uncertainty, with a focus on aerospace structures. In particular, simulations of an FCCZ lattice cell and of the profile shape of a nozzle are performed. Optimization under uncertainty is characterized by the need to make decisions without complete knowledge of the problem data. When dealing with a complex, non-linear, or optimization problem, two main issues arise: uncertainty in the feasibility of the solution and uncertainty in the objective function value. The first part examines Design of Experiments (DOE) methodologies, Uncertainty Quantification (UQ), and optimization under uncertainty. The second part shows an application of these theories through commercial software. Nowadays, multiobjective optimization of highly non-linear problems can be a powerful tool for approaching new concept solutions or developing cutting-edge designs. In this thesis an effective improvement has been achieved on a rocket nozzle. Future work could include the introduction of multi-scale modelling, multiphysics approaches, and any strategy useful to simulate the real operating conditions of the studied design as closely as possible.
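A sketch of one common DOE/UQ building block (a generic example of mine, not taken from the thesis): Latin hypercube sampling spreads a small number of design points evenly across an n-dimensional box, a typical first step before surrogate modelling and uncertainty quantification of expensive simulations such as a lattice cell or a nozzle profile. The variable names, bounds and sample count are assumptions made for the illustration.

```python
import random

def latin_hypercube(n_samples: int, bounds: list[tuple[float, float]], seed: int = 0):
    """Return n_samples points; each variable's range is cut into n_samples equal
    slices and every slice is used exactly once per variable."""
    rng = random.Random(seed)
    dims = len(bounds)
    points = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        slots = list(range(n_samples))
        rng.shuffle(slots)                      # random pairing of slices across variables
        width = (hi - lo) / n_samples
        for i, slot in enumerate(slots):
            points[i][d] = lo + (slot + rng.random()) * width
    return points

# e.g. 8 candidate designs over two hypothetical variables:
# lattice strut diameter (mm) and nozzle throat radius (mm)
for p in latin_hypercube(8, [(0.5, 2.0), (10.0, 30.0)]):
    print([round(v, 3) for v in p])
```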