925 results for "Surgical technique and possible pitfalls"


Abstract:

Current information is reviewed that provides clues to the intraspecific structure of dolphin species incidentally killed in the yellowfin tuna purse-seine fishery of the eastern tropical Pacific (ETP). Current law requires that management efforts be focused at the intraspecific level, attempting to preserve local and presumably locally adapted populations. Four species are reviewed: pantropical spotted, Stenella attenuata; spinner, S. longirostris; striped, S. coeruleoalba; and common, Delphinus delphis, dolphins. For each species, distributional, demographic, phenotypic, and genotypic data are summarized, and the putative stocks are categorized according to four hierarchical phylogeographic criteria reflecting their probability of being evolutionarily significant units. For spotted dolphins, the morphological similarity of animals from the south and the west argues that stock designations (and boundaries) be changed from the current northern offshore and southern offshore to a northeastern offshore and a combined western-southern offshore stock. For the striped dolphin, we find little reason to continue the present division into geographical stocks. For common dolphins, we reiterate an earlier recommendation that the long-beaked form (Baja neritic) and the northern short-beaked form be managed separately; recent morphological and genetic work provides evidence that they are probably separate species. Finally, we note that the stock structure of ETP spinner dolphins is complex, with the whitebelly form exhibiting characteristics of a hybrid swarm between the eastern and pantropical subspecies. There is little morphological basis at present for dividing the whitebelly spinner dolphin into northern and southern stocks. However, we recommend continued separate management of the pooled whitebelly forms, despite their hybrid/intergrade status. Steps should be taken to ensure that management practices do not reduce the abundance of the eastern form relative to the whitebelly form; doing so could lead to increased invasion of the eastern stock's range and possible replacement of the eastern spinner dolphin genome.

Abstract:

Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance; each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We start with short-range chip-to-chip links for board-level communication. Next, we discuss board-to-board links, which demand a longer communication range. Finally, on-chip links with communication ranges of a few millimeters are discussed.

Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. IO data rates have increased to the point where electrical signaling is now limited by the channel bandwidth, so achieving multi-Gb/s data rates requires complex designs that equalize the channel. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques for overcoming the limited bandwidth of electrical channels. A linear, low-power summer is the central block of a DFE. Conventional approaches implement the summer with current-mode techniques, which require high power consumption. To achieve low-power operation we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45 nm SOI CMOS to validate the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with up to 21 dB of loss while consuming about 7.5 mW from a 1.2 V supply. We also introduce a compact, low-power transmitter employing passive equalization; its efficacy is demonstrated with a prototype in 65 nm CMOS that achieves up to 20 Gb/s while consuming less than 10 mW.
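
The discrete-time operation a DFE performs is easy to sketch. The toy below is a minimal behavioral model with an invented channel and ideal feedback taps; it illustrates the summation that the dissertation realizes in the charge domain, not the circuit itself.

```python
import numpy as np

# Minimal discrete-time sketch of decision feedback equalization (DFE).
# Channel, tap count, and coefficients are illustrative, not the values
# used in the prototype receiver described above.

rng = np.random.default_rng(0)
bits = rng.choice([-1.0, 1.0], size=1000)        # transmitted symbols
channel = np.array([1.0, 0.45, 0.2])             # main cursor + post-cursor ISI
rx = np.convolve(bits, channel)[:len(bits)]      # received, ISI-corrupted samples

n_taps = 2
taps = channel[1:1 + n_taps]                     # ideal taps cancel the post-cursors
decisions = np.zeros(len(bits))
for n in range(len(bits)):
    # Summer: subtract weighted past decisions from the incoming sample.
    # (The dissertation realizes this summation in the charge domain.)
    fb = sum(taps[k] * decisions[n - 1 - k] for k in range(n_taps) if n - 1 - k >= 0)
    decisions[n] = 1.0 if rx[n] - fb >= 0 else -1.0

print("bit errors:", int(np.sum(decisions != bits)))
```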

An alternative to electrical signaling is optical signaling for chip-to-chip interconnection, which offers low channel loss and crosstalk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact, low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation with double sampling to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage running at the data rate, making it suitable for low-power implementation, and it allows time-division multiplexing to support very high data rates. A prototype implemented in 65 nm CMOS achieves up to 24 Gb/s with better than 0.4 pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area.
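
The decision rule behind double sampling with dynamic offset modulation can likewise be sketched behaviorally. This is a simplified model under stated assumptions (an input node whose RC time constant is much longer than the bit period, invented voltage levels and offset), not the prototype's design.

```python
import numpy as np

# Behavioral sketch of a double-sampling front-end with dynamic offset
# modulation (DOM). All values (swing, time constant, offset) are
# illustrative assumptions.

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=2000)
T, tau = 1.0, 8.0                  # bit period much shorter than the RC time constant
v_hi, v_lo = 1.0, 0.0              # asymptotic input-node voltages for a 1 / a 0
alpha = np.exp(-T / tau)

# The input node is never reset; it relaxes toward the level set by each bit.
v = np.empty(len(bits) + 1)
v[0] = 0.5
for i, b in enumerate(bits):
    target = v_hi if b else v_lo
    v[i + 1] = target + (v[i] - target) * alpha

# Double sampling: decide on the difference of consecutive samples. For long
# runs of identical bits this difference decays toward zero, so an offset
# whose sign follows the previous decision (dynamic offset modulation) keeps
# the comparator input bipolar. Any 0 < v_os < (1 - alpha)^2 works here.
v_os = 0.5 * (1 - alpha) ** 2 * (v_hi - v_lo)
d = np.empty(len(bits), dtype=int)
prev = 1
for i in range(len(bits)):
    delta = v[i + 1] - v[i]
    prev = 1 if delta + (v_os if prev else -v_os) >= 0 else 0
    d[i] = prev

print("bit errors:", int(np.sum(d != bits)))
```

The point of the offset is visible in the worst cases: after a long run of identical bits the double-sampled difference is nearly zero, and only the decision-dependent offset gives the comparator a defined polarity.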

As technology scales, the number of transistors on a chip grows, necessitating a corresponding increase in the bandwidth of on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on the double-sampling architecture and dynamic offset modulation technique, enabling low power consumption and high data rates with high bandwidth density in 28 nm CMOS technology. The functionality of the link is demonstrated using minimum-pitch on-chip wires of different lengths. Measurement results show that the link achieves up to 20 Gb/s (12.5 Gb/s/µm) with better than 136 fJ/b power efficiency.

Abstract:

In July 1994 an internationally coordinated, EU-financed multidisciplinary research project on Baltic cod recruitment was started. Its primary goals are (i) to identify and describe the dominant biotic and abiotic processes affecting the developmental success of early life stages and the maturation of cod in the Central Baltic; (ii) to incorporate these processes into recruitment models, in order to improve prediction of future stock fluctuations caused by environmental perturbations, species interactions, and fisheries management directives, as a prerequisite for an integrated fish stock assessment in the Central Baltic; and (iii) to evaluate the feasibility and possible effects of stock enhancement programs on stock and recruitment, providing the biological basis for assessing their economic value.

Abstract:

This work concerns itself with the possibility of solutions, both cooperative and market-based, to pollution abatement problems. In particular, we are interested in pollutant emissions in Southern California and possible solutions to the abatement problems enumerated in the 1990 Clean Air Act. A tradable pollution permit program has been implemented to reduce emissions, creating property rights associated with various pollutants.

Before we discuss the performance of market-based solutions to LA's pollution woes, we consider the existence of cooperative solutions. In Chapter 2, we examine pollutant emissions as a transboundary public bad. We show that for a class of environments in which pollution moves in a bi-directional, acyclic manner, there exists a sustainable coalition structure and associated levels of emissions. We do so via a new core concept, one more appropriate to modeling cooperative emissions agreements (and potential defection from them) than the standard definitions.

However, this leaves the question of implementing pollution abatement programs unanswered. While the existence of a cost-effective permit market equilibrium has long been understood, the implementation of such programs has been difficult. The design of Los Angeles' REgional CLean Air Incentives Market (RECLAIM) alleviated some of these implementation problems and exacerbated others. For example, it created two overlapping cycles of permits and two zones of permits for different geographic regions. While these design features create a market that allows some measure of regulatory control, they establish a very difficult trading environment, with the potential for inefficiency arising from transaction costs and from the illiquidity induced by the myriad assets and relatively few participants in this market.

It was with these concerns in mind that the ACE market (Automated Credit Exchange) was designed. The ACE market utilizes an iterated combined-value call market (CV Market). Before discussing the performance of the RECLAIM program in general and the ACE mechanism in particular, we test experimentally whether a portfolio trading mechanism can overcome market illiquidity. Chapter 3 experimentally demonstrates the ability of a portfolio trading mechanism to overcome portfolio rebalancing problems, thereby inducing sufficient liquidity for markets to fully equilibrate.
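
To make the combined-value idea concrete, here is a minimal sketch of a sealed-bid combined-value call: each order is a package over several permit types with a single limit value, and the clearing selects the feasible combination of orders that maximizes reported surplus. The orders, permit types, and values are invented for illustration, and a real implementation would solve an integer program rather than enumerate subsets.

```python
from itertools import combinations
import numpy as np

# Toy combined-value call market clearing. Each order: signed quantities
# over two hypothetical permit types (positive = buy) and a limit value
# (positive = max payment, negative = minimum revenue required to sell).

orders = [
    (np.array([+2, +1]), 30.0),   # buyer wants a mixed package
    (np.array([+1,  0]), 12.0),
    (np.array([-2,  0]), -14.0),  # seller asks at least 14 for 2 permits
    (np.array([ 0, -1]),  -4.0),
    (np.array([-1, -1]), -10.0),
]

best_set, best_surplus = (), 0.0
for r in range(1, len(orders) + 1):
    for subset in combinations(range(len(orders)), r):
        net = sum(orders[i][0] for i in subset)
        if np.all(net <= 0):                      # sells cover buys, per permit type
            surplus = sum(orders[i][1] for i in subset)
            if surplus > best_surplus:
                best_set, best_surplus = subset, surplus

print("accepted orders:", best_set, "reported surplus:", best_surplus)
```

Because whole packages are accepted or rejected together, a trader never ends up with an unbalanced portfolio, which is exactly the rebalancing problem the portfolio trading mechanism is meant to solve.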

With experimental evidence in hand, we consider the CV Market's performance in the real world. We find that as the allocation of permits declines toward the level of historical emissions, prices have risen. As of April of this year, prices are roughly equal to the cost of the Best Available Control Technology (BACT). This took longer than expected, due both to tendencies to misreport emissions under the old regime and to abatement technology advances encouraged by the program. We also find that the ACE market provides liquidity where needed to encourage long-term planning on behalf of polluting facilities.

Abstract:

We propose an efficient scheme to build an arbitrary multipartite Greenberger-Horne-Zeilinger state and discriminate all the universal Greenberger-Horne-Zeilinger states using parity measurement based on dipole-induced transparency in a cavity-waveguide system. A prominent advantage is that the initial entangled states remain after nondestructive identification, so they can be reused for successive tasks. We analyze the performance and possible errors of the required single-qubit rotations and emphasize that the scheme is reliable and feasible with current experimental technology.
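
As an illustration of how parity measurements discriminate GHZ states, the sketch below checks numerically that the eight three-qubit GHZ basis states are uniquely labeled by two neighboring Z-parities plus the collective X parity. It is a generic linear-algebra check, not a model of the dipole-induced-transparency scheme itself.

```python
import numpy as np
from itertools import product

# The GHZ basis states (|k> ± |k-complement>)/sqrt(2) are simultaneous
# eigenstates of Z_i Z_{i+1} and X...X; reading off these commuting parities
# identifies the state without destroying the entanglement.

n = 3
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I = np.eye(2)

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def ghz(bits, sign):
    k = int("".join(map(str, bits)), 2)
    v = np.zeros(2**n)
    v[k] = 1.0
    v[2**n - 1 - k] = sign        # the bitwise complement of |k>
    return v / np.sqrt(2)

zz = [kron_all([Z if q in (i, i + 1) else I for q in range(n)]) for i in range(n - 1)]
xxx = kron_all([X] * n)

for bits in product([0, 1], repeat=n - 1):
    for sign in (+1, -1):
        state = ghz((0,) + bits, sign)
        parities = [int(np.rint(state @ m @ state)) for m in zz + [xxx]]
        print(f"GHZ |0{''.join(map(str, bits))}>, sign {sign:+d} -> parities {parities}")
```

Each of the eight states prints a distinct parity triple, which is the essence of complete GHZ-state discrimination by parity measurement.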

Abstract:

These studies explore how, where, and when representations of variables critical to decision-making arise in the brain. To produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. When choosing among alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and then compared in an abstract "value space" in order to produce a decision. Despite much progress, in particular the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type, confirming that value is represented abstractly, a key tenet of value-based decision-making. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly speaking, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the "goal-directed" system, which selects actions based on an internal model of the environment, and the "habitual" system, which generates responses based on antecedent stimuli alone. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes as well as stimuli and actions be processed, as the sketch below illustrates. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.
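
The contrast in informational requirements can be summarized by the update rules usually associated with the two controllers. The toy environment and parameters below are invented; the point is what each computation needs as input.

```python
import numpy as np

n_states, n_actions = 4, 2
rng = np.random.default_rng(2)

# Habitual / model-free: caches values from (stimulus, action, reward) alone.
Q = np.zeros((n_states, n_actions))
def model_free_update(s, a, r, s_next, alpha=0.1, gamma=0.9):
    td_error = r + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * td_error       # no outcome identity is ever represented

# Goal-directed / model-based: needs a transition model over outcomes and
# their current desirability, recomputing values on the fly.
T = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P(s'|s,a)
outcome_value = rng.normal(size=n_states)                         # utility of outcomes
def model_based_values(gamma=0.9, iters=50):
    V = np.zeros(n_states)
    for _ in range(iters):            # value iteration over the internal model
        V = (T @ (outcome_value + gamma * V)).max(axis=1)
    return V

model_free_update(s=0, a=1, r=1.0, s_next=2)   # one habitual learning step
print("cached Q[0]:", np.round(Q[0], 3))
print("model-based values:", np.round(model_based_values(), 3))
```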

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well-studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment and therefore can be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model-fitting and comparison process pointed to the use of "belief thresholding", sketched below. This implies that subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference, consistent with a serial hypothesis-testing strategy.
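
A minimal sketch of belief thresholding, with an invented generative model standing in for the hierarchical task: posterior updating proceeds by Bayes' rule, and hypotheses whose posterior falls below a cutoff are eliminated and no longer updated.

```python
import numpy as np

rng = np.random.default_rng(3)
n_hyp, n_obs = 8, 4
likelihood = rng.dirichlet(np.ones(n_obs), size=n_hyp)  # P(obs | hypothesis), invented
true_h = 2

belief = np.full(n_hyp, 1.0 / n_hyp)
active = np.ones(n_hyp, dtype=bool)
threshold = 0.02

for t in range(60):
    obs = rng.choice(n_obs, p=likelihood[true_h])
    belief[active] *= likelihood[active, obs]   # incremental Bayesian update
    belief[~active] = 0.0                       # pruned hypotheses are not updated
    belief /= belief.sum()
    survivors = active & (belief >= threshold)  # belief thresholding
    if survivors.any():                         # keep at least one hypothesis alive
        active = survivors

print("final posterior:", np.round(belief, 3))
```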

Abstract:

In dentistry, indirect dental restorations are most commonly fabricated on a gypsum die, obtained from an elastomer impression taken of a prepared tooth. Several factors can influence the precision of fit of these restorations, such as the flow of the pouring material within the impression, the compatibility of the pouring material with the impression material, the setting time, the dimensional stability, the mechanical strength of the material upon separation of impression and die, the abrasion resistance, and the fidelity of detail reproduction. Materials have been introduced into dentistry for die fabrication with the aim of minimizing the drawbacks of gypsum, such as low abrasion resistance and slight setting expansion. Among them are metal-plated dies and epoxy resins, which have advantages with respect to mechanical properties; however, the former requires a time-consuming, high-cost technique, and the latter exhibits shrinkage. The present work tests a new composition of unsaturated polyester with styrene, filled with calcium carbonate in different proportions (10, 20, 30, 40, 50, 60, and 70%), and compares it with type IV gypsum and an aluminum-oxide-filled epoxy resin through mechanical, abrasion, and dimensional-change tests, in order to evaluate its possible use as a die material for the construction of indirect restorations. The materials were characterized by infrared spectrometry, differential scanning calorimetry, thermogravimetric analysis, and scanning electron microscopy. The unsaturated-polyester composite with 50% calcium carbonate proved viable as a die material. Compared with the control materials, it showed mechanical properties close to those of the epoxy resin and well above those of gypsum, abrasion resistance higher than that of gypsum and lower than that of the epoxy resin, and dimensional change close to that of the epoxy resin and greater than that of gypsum. Since the polyester/calcium carbonate formulation consists only of polymer, catalyst, and filler, it is possible to improve it by modifying the filler and/or adding additives to minimize polymerization shrinkage.

Abstract:

Morphogenesis is a phenomenon of intricate balance and dynamic interplay between processes occurring at a wide range of scales (spatial, temporal, and energetic). During development, a variety of physical mechanisms are employed by tissues to simultaneously pattern, move, and differentiate based on information exchange between constituent cells, perhaps more than at any other time during an organism's life. To fully understand such events, a combined theoretical and experimental framework is required to help decipher the correlations at both structural and functional levels, at scales ranging from the intracellular and tissue levels to organs and organ systems. Microscopy, especially diffraction-limited light microscopy, has emerged as a central tool to capture the spatio-temporal context of life processes. Imaging has the unique advantage of watching biological events as they unfold over time at single-cell resolution in the intact animal. In this work I present a range of problems in morphogenesis, each unique in its requirements for novel quantitative imaging, both in technique and in analysis.

Understanding the molecular basis for a developmental process involves investigating how genes and their products (mRNA and proteins) function in the context of a cell. Structural information holds the key to insights into mechanisms, and imaging fixed specimens is the first step toward deciphering gene function. The work presented in this thesis starts with the demonstration that the fluorescent signal obtained by in situ hybridization chain reaction (HCR) in the challenging environment of whole-mount imaging scales linearly with the number of copies of target mRNA, providing quantitative sub-cellular mapping of mRNA expression within intact vertebrate embryos.

The work then progresses to imaging live embryonic development in a number of species. While processes such as avian cartilage growth require high spatial resolution and lower time resolution, dynamic events during zebrafish somitogenesis require higher time resolution to capture protein localization as the somites mature. The requirements on imaging are even more stringent for the embryonic zebrafish heart, which beats at a frequency of about 2-2.5 Hz and therefore demands very fast imaging techniques based on a two-photon light-sheet microscope to capture its dynamics. In each of these cases, ranging from molecules to organs, an imaging framework is developed, in both technique and analysis, to allow quantitative assessment of the process in vivo. Overall, the work presented in this thesis combines new quantitative tools with novel microscopy for the precise understanding of processes in embryonic development.

Abstract:

Ultrafast temporal pattern generation and recognition with femtosecond laser technology is presented, analyzed, and experimentally implemented. Both are realized by taking advantage of two well-known techniques: space-time conversion and ultrafast pulse measurement. We describe the temporal pattern of the designed multiple pulses, optimized assuming a Gaussian spectral distribution for the ultrashort pulse. Simulations with a Gaussian spectral distribution show that the uniformity of the generated pulse train depends on the number of modulation periods repeated across the mask in the spectral plane. The variation of the spectral phase with wavelength across the modulated phase plate is also taken into account. Experiments on ultrafast temporal pattern recognition using the frequency-resolved optical gating (FROG) characterization technique are also presented.
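
The mask-based generation of multiple pulses can be illustrated with a standard Fourier-optics calculation (a generic periodic phase mask, not the paper's specific design): a sinusoidal spectral phase applied to a Gaussian spectrum yields a train of pulse replicas whose relative heights, and hence uniformity, are set by the modulation.

```python
import numpy as np

# A spectral phase mask exp(i*a*cos(w*T)) expands as sum_n i^n J_n(a) e^(i n w T),
# so the output is a train of replicas at t = n*T weighted by Bessel functions
# J_n(a). All parameter values below are illustrative.

n = 2**14
t = np.linspace(-2000.0, 2000.0, n)            # time axis (fs)
dt = t[1] - t[0]
w = 2 * np.pi * np.fft.fftfreq(n, d=dt)        # angular frequency (rad/fs)

tau = 30.0                                     # input pulse duration (fs)
E_t = np.exp(-t**2 / (2 * tau**2))             # transform-limited Gaussian pulse
E_w = np.fft.fft(E_t)

a, T = 2.0, 300.0                              # modulation depth, replica spacing (fs)
mask = np.exp(1j * a * np.cos(w * T))          # periodic spectral phase mask
out = np.fft.ifft(E_w * mask)

intensity = np.abs(out)**2
# Replica heights follow |J_n(a)|^2; changing the mask changes the uniformity.
for k in range(-3, 4):
    idx = np.argmin(np.abs(t - k * T))
    print(f"t = {k * T:6.0f} fs, relative intensity = {intensity[idx]:.3f}")
```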

Abstract:

We study the behavior of granular materials at three length scales. At the smallest length scale, the grain-scale, we study inter-particle forces and "force chains". Inter-particle forces are the natural building blocks of constitutive laws for granular materials. Force chains are a key signature of the heterogeneity of granular systems. Despite their fundamental importance for calibrating grain-scale numerical models and elucidating constitutive laws, inter-particle forces have not been fully quantified in natural granular materials. We present a numerical force inference technique for determining inter-particle forces from experimental data and apply the technique to two-dimensional and three-dimensional systems under quasi-static and dynamic load. These experiments validate the technique and provide insight into the quasi-static and dynamic behavior of granular materials.

At a larger length scale, the mesoscale, we study the emergent frictional behavior of a collection of grains. Properties of granular materials at this intermediate scale are crucial inputs for macro-scale continuum models. We derive friction laws for granular materials at the mesoscale by applying averaging techniques to grain-scale quantities. These laws portray the nature of steady-state frictional strength as a competition between steady-state dilation and grain-scale dissipation rates. The laws also directly link the rate of dilation to the non-steady-state frictional strength.

At the macro-scale, we investigate continuum modeling techniques capable of simulating the distinct solid-like, liquid-like, and gas-like behaviors exhibited by granular materials in a single computational domain. We propose a Smoothed Particle Hydrodynamics (SPH) approach for granular materials with a viscoplastic constitutive law. The constitutive law uses a rate-dependent and dilation-dependent friction law. We provide a theoretical basis for a dilation-dependent friction law using similar analysis to that performed at the mesoscale. We provide several qualitative and quantitative validations of the technique and discuss ongoing work aiming to couple the granular flow with gas and fluid flows.
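
As one example of the kind of rate-dependent friction law such a continuum model takes as input, the sketch below evaluates the widely used mu(I) rheology. This is a stand-in: the thesis's own law also includes dilation dependence, and the coefficients here are generic literature-style values, not the thesis's.

```python
import numpy as np

# mu(I) rheology: friction grows from a static value mu_s toward a limiting
# value mu_2 as the inertial number I (dimensionless shear rate) increases.
# Coefficients and material parameters are illustrative assumptions.

mu_s, mu_2, I_0 = 0.38, 0.64, 0.28

def inertial_number(shear_rate, pressure, d=1e-3, rho=2500.0):
    """I = gamma_dot * d / sqrt(P / rho), for grain diameter d and density rho."""
    return shear_rate * d / np.sqrt(pressure / rho)

def mu_of_I(I):
    return mu_s + (mu_2 - mu_s) / (1.0 + I_0 / I)

for gd in (1.0, 10.0, 100.0):
    I = inertial_number(gd, pressure=1e4)
    print(f"shear rate {gd:6.1f} 1/s -> I = {I:.4f}, mu = {mu_of_I(I):.3f}")
```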

Abstract:

In this paper, the feed-forward back-propagation artificial neural network (BP-ANN) algorithm is introduced into the traditional Focus Calibration using Alignment (FOCAL) technique, and a novel BP-ANN-based FOCAL technique is proposed. The effects of parameters such as the number of neurons in the hidden layer and the number of training epochs on measurement accuracy are analyzed in detail. The results show that the BP-ANN-based FOCAL technique is more reliable and is a better choice for measuring image-quality parameters.
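
For concreteness, a minimal feed-forward back-propagation network is sketched below on a toy one-dimensional regression; the hidden-layer size and epoch count are exactly the two hyperparameters whose effect the paper analyzes. The data and values are illustrative, not FOCAL measurements.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(np.pi * x)                       # stand-in regression target

n_hidden, lr, epochs = 12, 0.05, 3000       # hidden neurons and training epochs:
W1 = rng.normal(0, 0.5, (1, n_hidden))      # the two knobs studied in the paper
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1))
b2 = np.zeros(1)

for epoch in range(epochs):
    h = np.tanh(x @ W1 + b1)                # forward pass through the hidden layer
    pred = h @ W2 + b2
    err = pred - y
    # Backward pass: propagate the error and apply gradient-descent updates.
    dW2 = h.T @ err / len(x)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)          # tanh derivative
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

final = np.tanh(x @ W1 + b1) @ W2 + b2
print("final MSE:", float(np.mean((final - y) ** 2)))
```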

Abstract:

This paper summarizes recent research results from Changhe Zhou's group at the Information Optics Lab of the Shanghai Institute of Optics and Fine Mechanics (SIOM). The first topic is Talbot self-imaging: we have found the symmetry rule, the regular-rearranged neighboring phase difference rule, and the prime-number decomposing rule, briefly summarized in a recent educational publication (Optics and Photonics News, pp. 46-50, November 2004). The second is four novel micro-optical gratings designed and fabricated at SIOM. The third is the design and fabrication of novel superresolution phase plates for beam shaping and possible use in optical storage. The fourth is the development of novel femtosecond laser information processing techniques incorporating micro-optical elements, for example the use of a pair of reflective Dammann gratings to split femtosecond laser pulses. The most attractive feature of this approach is that a conventional beam splitter is avoided; a conventional beam splitter would introduce unequal dispersion due to the broadband spectrum of ultrashort laser pulses, which affects the splitting result. We implemented the Dammann splitting apparatus using two-layered reflective Dammann gratings, which generate nearly identical arrays without angular dispersion. We believe this device is highly interesting for splitting femtosecond laser pulses.
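
Talbot self-imaging itself is straightforward to verify numerically. The sketch below propagates a binary amplitude grating by one Talbot distance z_T = 2d²/λ using the paraxial (Fresnel) transfer function and confirms that the field reproduces itself; parameters are illustrative.

```python
import numpy as np

lam = 0.5e-6                    # wavelength (m)
d = 100e-6                      # grating period (m)
n = 4096
x = np.linspace(0.0, 64 * d, n, endpoint=False)   # 64 periods, 64 samples each
dx = x[1] - x[0]

u0 = (np.mod(x, d) < d / 2).astype(float)   # binary 50% duty-cycle amplitude grating

fx = np.fft.fftfreq(n, d=dx)                # spatial frequencies (1/m)
z_T = 2 * d**2 / lam                        # Talbot distance
H = np.exp(-1j * np.pi * lam * z_T * fx**2) # Fresnel (paraxial) transfer function

u = np.fft.ifft(np.fft.fft(u0) * H)         # angular-spectrum propagation to z_T
print("max self-image deviation:", float(np.max(np.abs(np.abs(u)**2 - u0))))
```

At z_T every grating order m acquires a phase of 2π·m², so the input field is reproduced exactly in the paraxial limit; the printed deviation is at the level of numerical round-off.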

Abstract:

The Mössbauer technique has been used to study the nuclear hyperfine interactions and lifetimes in ¹⁸²W (2+ state) and ¹⁸³W (3/2− and 5/2− states) with the following results: g(5/2−)/g(2+) = 1.40 ± 0.04; g(3/2−) = −0.07 ± 0.07; Q(5/2−)/Q(2+) = 0.94 ± 0.04; T1/2(3/2−) = 0.184 ± 0.005 nsec; T1/2(5/2−) ≳ 0.7 nsec. These quantities are discussed in terms of a rotation-particle interaction in ¹⁸³W due to Coriolis coupling. From the measured quantities and additional information on γ-ray transition intensities, magnetic single-particle matrix elements are derived. It is inferred from these that the two effective g-factors, resulting from the Nilsson-model calculation of the single-particle matrix elements for the spin operators ŝz and ŝ₊, are not equal, consistent with a proposal of Bochnacki and Ogaza.
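
For reference, ratios such as g(5/2−)/g(2+) and Q(5/2−)/Q(2+) are extracted from Mössbauer line positions governed, to first order and for collinear field and field-gradient axes with η = 0, by the standard sublevel energies for combined magnetic dipole and electric quadrupole interactions:

\[
E(m_I) \;=\; -\,g\,\mu_N H\, m_I \;+\; \frac{e q Q}{4I(2I-1)}\left[\,3m_I^2 - I(I+1)\,\right],
\]

where each observed line is a difference of such sublevel energies between the excited and ground states.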

The internal magnetic fields at the tungsten nucleus were determined for substitutional solid solutions of tungsten in iron, cobalt, and nickel. With g(2+) = 0.24 the results are: |Heff(W-Fe)| = 715 ± 10 kG; |Heff(W-Co)| = 360 ± 10 kG; |Heff(W-Ni)| = 90 ± 25 kG. The electric field gradients at the tungsten nucleus were determined for WS2 and WO3. With Q(2+) = −1.81 b the results are: for WS2, eq = −(1.86 ± 0.05) × 10¹⁸ V/cm²; for WO3, eq = (1.54 ± 0.04) × 10¹⁸ V/cm² and η = 0.63 ± 0.02.

The 5/2− state of ¹⁹⁵Pt has also been studied with the Mössbauer technique, and the g-factor of this state has been determined to be −0.41 ± 0.03. The following magnetic fields at the Pt nucleus were found: in an Fe lattice, 1.19 ± 0.04 MG; in a Co lattice, 0.86 ± 0.03 MG; and in a Ni lattice, 0.36 ± 0.04 MG. Isomeric shifts have been detected in a number of compounds and alloys and have been interpreted to imply that the mean square radius of the ¹⁹⁵Pt nucleus in the first excited state is smaller than in the ground state.

Abstract:

The experimental portion of this thesis estimates the power spectral density of very low frequency semiconductor noise, from 10^-6.3 cps to 1 cps, with greater accuracy than that achieved in previous similar attempts. It is concluded that the spectrum is 1/f^α with α approximately 1.3 over most of the frequency range, though α appears to be about 1 in the lowest decade. The noise sources are, among others, the first-stage circuits of a grounded-input silicon epitaxial operational amplifier. This thesis also investigates a peculiar form of stationarity which seems to distinguish flicker noise from other semiconductor noise.

In order to decrease by an order of magnitude the pernicious effects of temperature drifts, semiconductor "aging", and possible mechanical failures associated with prolonged periods of data taking, 10 independent noise sources were time-multiplexed and their spectral estimates were subsequently averaged. If the sources have similar spectra, it is demonstrated that this reduces the necessary data-taking time by a factor of 10 for a given accuracy.

In view of the measured high temperature sensitivity of the noise sources, it was necessary to combine the passive attenuation of a special-material container with active control. The noise sources were placed in a copper-epoxy container of high heat capacity and medium heat conductivity, and that container was immersed in a temperature controlled circulating ethylene-glycol bath.

Other spectra of interest, estimated from data taken concurrently with the semiconductor noise data, were the spectra of the bath's controlled temperature, the semiconductor surface temperature, and the power-supply voltage amplitude fluctuations. A brief description of the equipment constructed to obtain these data is included.

The analytical portion of this work is concerned with the following questions: What is the best final spectral density estimate given 10 statistically independent ones of varying quality and magnitude? How can the Blackman-Tukey algorithm, which is used for spectral estimation in this work, be improved upon? How can non-equidistant sampling reduce data-processing cost? Should one try to remove common trends shared by supposedly statistically independent noise sources and, if so, what are the mathematical difficulties involved? What is a physically plausible mathematical model that can account for flicker noise, and what are its implications for the noise's statistical properties? Finally, the variance of the spectral estimate obtained through the Blackman-Tukey algorithm is analyzed in greater detail; the variance is shown to diverge for α ≥ 1 in an assumed power spectrum of k/|f|^α, unless the assumed spectrum is "truncated".
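
A minimal sketch of the Blackman-Tukey estimate discussed above, run on an invented correlated signal: estimate the autocovariance out to a maximum lag, apply a lag window, and Fourier transform the symmetric extension.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 8192
x = np.convolve(rng.normal(size=n), np.ones(20) / 20, mode="same")  # correlated signal
x -= x.mean()

max_lag = 256
acov = np.array([x[:n - k] @ x[k:] / n for k in range(max_lag + 1)])  # biased autocovariance
window = 0.5 * (1 + np.cos(np.pi * np.arange(max_lag + 1) / max_lag)) # Hann lag window
rw = acov * window

r = np.concatenate([rw, rw[-2:0:-1]])   # symmetric two-sided extension of the lags
psd = np.fft.rfft(r).real               # smoothed Blackman-Tukey spectral estimate
print("PSD estimate at the lowest frequencies:", np.round(psd[:4], 3))
```

The choice of maximum lag and lag window controls the bias-variance trade-off of the estimate, which is precisely the trade-off whose divergence for α ≥ 1 is analyzed above.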