815 results for find it fast


Relevance:

30.00%

Publisher:

Abstract:

In Part I of this thesis, a new magnetic spectrometer experiment which measured the β spectrum of ^(35)S is described. New limits on heavy neutrino emission in nuclear β decay were set, for a heavy neutrino mass range between 12 and 22 keV. In particular, this measurement rejects the hypothesis that a 17 keV neutrino is emitted, with sin^2 θ = 0.0085, at the 6σ statistical level. In addition, an auxiliary experiment was performed, in which an artificial kink was induced in the β spectrum by means of an absorber foil which masked a fraction of the source area. In this measurement, the sensitivity of the magnetic spectrometer to the spectral features of heavy neutrino emission was demonstrated.
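
For context, the kink signature searched for follows from the standard two-component description of β decay with a heavy-neutrino admixture (textbook formalism, not a result specific to this thesis):

$$\frac{dN}{dE} = \left(1-\sin^2\theta\right)\frac{dN}{dE}\bigg|_{m_\nu \approx 0} + \sin^2\theta\,\frac{dN}{dE}\bigg|_{m_\nu = m_h},$$

where the heavy branch switches off at an energy m_h below the spectrum endpoint, so a 17 keV neutrino would produce a kink 17 keV below the endpoint with a size set by sin^2 θ.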

In Part II, a measurement of the neutron spallation yield and multiplicity by the Cosmic-ray Underground Background Experiment is described. The production of fast neutrons by muons was investigated at an underground depth of 20 meters water equivalent, with a 200 liter detector filled with 0.09% Gd-loaded liquid scintillator. We measured a neutron production yield of (3.4 ± 0.7) x 10^(-5) neutrons per muon-g/cm^2, in agreement with other experiments. A single-to-double neutron multiplicity ratio of 4:1 was observed. In addition, stopped π^+ decays to µ^+ and then e^+ were observed, as was the associated production of pions and neutrons by muon spallation interactions. It was seen that practically all of the π^+ produced by muons were also accompanied by at least one neutron. These measurements serve as the basis for neutron background estimates for the San Onofre neutrino detector.

Relevance:

30.00%

Publisher:

Abstract:

Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits have led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.

This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.

One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.

One might think that because a system is Turing-complete, capable of computing “anything,” it can do any arbitrary task. But while it can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense, and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.

Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.

As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.

Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.

We show that simple primitives such as insertion and deletion are able to generate complex and interesting results such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing languages that are more expressive than regular languages and, at most, as expressive as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.

We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. We show that monomer molecules are converted into the polymer in logarithmic time via spectrofluorimetry and gel electrophoresis experiments. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude, by programming the sequences of DNA that initiate the reaction.
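
As a rough numerical illustration of why this insertion scheme gives logarithmic-time growth (a toy counting model under stated assumptions, not the experimental chemistry), the sketch below tracks open insertion sites: each round every site accepts one monomer and exposes two new sites, so length grows geometrically and a target length is reached in roughly log2(n) rounds.

```python
# Toy model (assumption): every round, each open insertion site accepts one
# monomer and exposes two new sites, as in the chain-reaction scheme above.
import math

def rounds_to_reach(target_length, initial_sites=1):
    length, sites, rounds = 2, initial_sites, 0   # seed: a short starting polymer
    while length < target_length:
        length += sites          # one monomer inserted at every open site
        sites *= 2               # each insertion exposes two new sites
        rounds += 1
    return rounds

for n in (10, 100, 1000, 10**6):
    print(n, rounds_to_reach(n), math.ceil(math.log2(n)))  # rounds ~ log2(n)
```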

In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.

Relevance:

30.00%

Publisher:

Abstract:

Crustal structure in Southern California is investigated using travel times from over 200 stations and thousands of local earthquakes. The data are divided into two sets of first arrivals representing a two-layer crust. The Pg arrivals have paths that refract at depths near 10 km and the Pn arrivals refract along the Moho discontinuity. These data are used to find lateral and azimuthal refractor velocity variations and to determine refractor topography.

In Chapter 2 the Pn raypaths are modeled using linear inverse theory. This enables statistical verification that static delays, lateral slowness variations and anisotropy are all significant parameters. However, because of the inherent size limitations of inverse theory, the full array data set could not be processed and the possible resolution was limited. The tomographic backprojection algorithm developed for Chapters 3 and 4 avoids these size problems. This algorithm allows us to process the data sequentially and to iteratively refine the solution. The variance and resolution for tomography are determined empirically using synthetic structures.
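
As a hedged sketch of the kind of Pn travel-time parametrization described here (static station delays, a background slowness, and azimuthal anisotropy), the snippet below predicts a single arrival time; the functional form and every numerical value are illustrative assumptions, not the thesis's fitted parameters.

```python
# Illustrative Pn travel-time model: station delay + distance x slowness,
# with a cos(2*azimuth) anisotropy perturbation. All values are assumed.
import math

def pn_travel_time(distance_km, azimuth_deg, station_delay_s,
                   slowness_s_per_km=1 / 7.8,     # ~7.8 km/s nominal Pn velocity
                   aniso_amp=0.02 * (1 / 7.8),    # ~2% anisotropy, per the abstract
                   fast_azimuth_deg=-68.0):       # NWW fast direction (assumed value)
    """Predicted Pn arrival time for one source-station path."""
    phi = math.radians(azimuth_deg - fast_azimuth_deg)
    # Azimuthal anisotropy enters as a cos(2*phi) perturbation of the slowness.
    slowness = slowness_s_per_km - aniso_amp * math.cos(2 * phi)
    return station_delay_s + distance_km * slowness

print(pn_travel_time(distance_km=300.0, azimuth_deg=-68.0, station_delay_s=0.6))
```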

The Pg results spectacularly image the San Andreas Fault, the Garlock Fault and the San Jacinto Fault. The Mojave has slower velocities near 6.0 km/s while the Peninsular Ranges have higher velocities of over 6.5 km/s. The San Jacinto block has velocities only slightly above the Mojave velocities. It may have overthrust Mojave rocks. Surprisingly, the Transverse Ranges are not apparent at Pg depths. The batholiths in these mountains are possibly only surficial.

Pn velocities are fast in the Mojave, slow in Southern California Peninsular Ranges and slow north of the Garlock Fault. Pn anisotropy of 2% with a NWW fast direction exists in Southern California. A region of thin crust (22 km) centers around the Colorado River, where the crust has undergone Basin and Range-type extension. Station delays see the Ventura and Los Angeles Basins but not the Salton Trough, where high velocity rocks underlie the sediments. The Transverse Ranges have a root in their eastern half but not in their western half. The Southern Coast Ranges also have a thickened crust but the Peninsular Ranges have no major root.

Relevance:

30.00%

Publisher:

Abstract:

This thesis addresses a series of topics related to the question of how people find foreground objects in complex scenes. With both computer vision modeling and psychophysical analyses, we explore the computational principles for low- and mid-level vision.

We first explore the computational methods of generating saliency maps from images and image sequences. We propose an extremely fast algorithm called Image Signature that detects the locations in the image that attract human eye gazes. With a series of experimental validations based on human behavioral data collected from various psychophysical experiments, we conclude that the Image Signature and its spatial-temporal extension, the Phase Discrepancy, are among the most accurate algorithms for saliency detection under various conditions.
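
For readers unfamiliar with the method, the published Image Signature idea is to take the sign of an image's 2-D DCT, invert it, square the result, and smooth it; the minimal sketch below follows that recipe, with channel handling and blur width as illustrative assumptions rather than the thesis's settings.

```python
# Minimal sketch of the Image Signature saliency idea (sign of the 2-D DCT).
import numpy as np
from scipy.fftpack import dct, idct
from scipy.ndimage import gaussian_filter

def dct2(x):  return dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(x): return idct(idct(x, axis=0, norm='ortho'), axis=1, norm='ortho')

def image_signature_saliency(gray_image, blur_sigma=3.0):
    """Saliency map from the sign of the 2-D DCT of a grayscale image."""
    signature = np.sign(dct2(gray_image))      # the "image signature"
    reconstruction = idct2(signature)          # back to the spatial domain
    saliency = gaussian_filter(reconstruction ** 2, blur_sigma)
    return saliency / saliency.max()

# Toy usage: a small bright square on a noisy background should pop out.
img = 0.05 * np.random.rand(64, 64)
img[24:40, 24:40] += 1.0
print(image_signature_saliency(img).argmax())
```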

In the second part, we bridge the gap between fixation prediction and salient object segmentation with two efforts. First, we propose a new dataset that contains both fixation and object segmentation information. By simultaneously presenting the two types of human data in the same dataset, we are able to analyze their intrinsic connection, as well as understand the drawbacks of today’s “standard” but inappropriately labeled salient object segmentation dataset. Second, we also propose an algorithm for salient object segmentation. Based on our novel discoveries about the connections between fixation data and salient object segmentation data, our model significantly outperforms all existing models on all 3 datasets by large margins.

In the third part of the thesis, we discuss topics around the human factors of boundary analysis. Closely related to salient object segmentation, boundary analysis focuses on delimiting the local contours of an object. We identify the potential pitfalls of algorithm evaluation for the problem of boundary detection. Our analysis indicates that today’s popular boundary detection datasets contain a significant level of noise, which may severely influence the benchmarking results. To give further insight into the labeling process, we propose a model to characterize the principles of the human factors during the labeling process.

The analyses reported in this thesis offer new perspectives on a series of interrelated issues in low- and mid-level vision. They raise warning signs about some of today’s “standard” procedures, while proposing new directions to encourage future research.

Relevance:

30.00%

Publisher:

Abstract:

Earthquake early warning (EEW) systems have been rapidly developing over the past decade. The Japan Meteorological Agency (JMA) has an EEW system that was operating during the 2011 M9 Tohoku earthquake in Japan, and this increased the awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges before it becomes practical, the availability of shorter-term EEW opens up a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system utilizes the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit the scope for human intervention to activate mitigation actions, and this must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach along with machine learning techniques and decision theories from economics to improve different aspects of EEW operation, including extending it to engineering applications.

Existing EEW systems are often based on a deterministic approach. Often, they assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, built on an existing deterministic model, that extends the EEW system to the case of concurrent events, which are often observed during the aftershock sequence after a large earthquake.

To overcome the challenge of uncertain information and the short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that can capture the uncertainties in EEW information and the decision process is used. This approach, called Performance-Based Earthquake Early Warning, builds on the PEER Performance-Based Earthquake Engineering method. Use of surrogate models is suggested to improve computational efficiency. Also, new models are proposed to add the influence of lead time into the cost-benefit analysis. For example, a value of information model is used to quantify the potential value of delaying the activation of a mitigation action for a possible reduction of the uncertainty of EEW information in the next update. Two practical examples, evacuation alert and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as the case of multiple-action decisions and the synergy of EEW and structural health monitoring systems, are also discussed.
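
As a hedged sketch of the cost-benefit reasoning underlying such a framework, the snippet below compares the expected cost of acting versus waiting, given an EEW-derived probability that shaking exceeds a damaging level; the cost numbers and names are illustrative assumptions, not the thesis's calibrated models.

```python
# Illustrative one-shot decision rule: act if the expected cost of acting is
# lower than the expected cost of doing nothing. All numbers are assumptions.
def expected_costs(p_exceed, cost_action, loss_if_unmitigated, loss_if_mitigated):
    """Expected cost of acting vs. not acting for one mitigation decision."""
    act = cost_action + p_exceed * loss_if_mitigated
    wait = p_exceed * loss_if_unmitigated
    return act, wait

# Example: stopping an elevator costs little; being caught moving costs a lot.
p = 0.3   # probability (from EEW) that shaking exceeds the damage threshold
act, wait = expected_costs(p, cost_action=1.0,
                           loss_if_unmitigated=50.0, loss_if_mitigated=5.0)
print("activate mitigation" if act < wait else "do nothing", act, wait)
```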

Relevance:

30.00%

Publisher:

Abstract:

The propagation of the fast magnetosonic wave in a tokamak plasma has been investigated at low power, between 10 and 300 watts, as a prelude to future heating experiments.

The attention of the experiments has been focused on the understanding of the coupling between a loop antenna and a plasma-filled cavity. Special emphasis has been given to the measurement of the complex loading impedance of the plasma. The importance of this measurement is that once the complex loading impedance of the plasma is known, a matching network can be designed so that the r.f. generator impedance can be matched to one of the cavity modes, thus delivering maximum power to the plasma. For future heating experiments it will be essential to be able to match the generator impedance to a cavity mode in order to couple the r.f. energy efficiently to the plasma.

As a consequence of the complex impedance measurements, it was discovered that the designs of the transmitting antenna and the impedance matching network are both crucial. The losses in the antenna and the matching network must be kept below the plasma loading in order to be able to detect the complex plasma loading impedance. This is even more important in future heating experiments, because the fundamental basis for efficient heating before any other consideration is to deliver more energy into the plasma than is dissipated in the antenna system.

The characteristics of the magnetosonic cavity modes are confirmed by three different methods. First, the cavity modes are observed as voltage maxima at the output of a six-turn receiving probe. Second, they also appear as maxima in the input resistance of the transmitting antenna. Finally, when the real and imaginary parts of the measured complex input impedance of the antenna are plotted in the complex impedance plane, the resulting curves are approximately circles, indicating a resonance phenomenon.

The observed plasma loading resistances at the various cavity modes are as high as 3 to 4 times the basic antenna resistance (~0.4 Ω). The estimated cavity Q’s were between 400 and 700. This means that efficient energy coupling into the tokamak and low losses in the antenna system are possible.
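
As a rough worked estimate (not a figure quoted in the thesis), these numbers imply that most of the coupled power would be dissipated in the plasma rather than in the antenna:

```python
# Rough estimate: with the plasma presenting 3-4 times the ~0.4-ohm antenna
# resistance at a cavity mode, the fraction of power reaching the plasma is
# R_plasma / (R_plasma + R_antenna). Values are taken from the abstract above.
R_antenna = 0.4                      # ohms, basic antenna resistance
for ratio in (3.0, 4.0):
    R_plasma = ratio * R_antenna
    eta = R_plasma / (R_plasma + R_antenna)
    print(f"loading ratio {ratio:.0f}: coupling efficiency ~ {eta:.2f}")
```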

Relevance:

30.00%

Publisher:

Abstract:

DC and transient measurements of space-charge-limited currents through alloyed and symmetrical n^+ν n^+ structures made of nominally 75 kΩ·cm ν-type silicon are studied before and after the introduction of defects by 14 MeV neutron radiation. In the transient measurements, the current response to a large turn-on voltage step is analyzed. Right after the voltage step is applied, the current transient reaches a value which we shall call the "initial current" value. At longer times, the transient current decays from the initial current value if traps are present.

Before the irradiation, the initial current density-voltage characteristics J(V) agree quantitatively with the theory of trap-free space-charge-limited current in solids. We obtain for the electron mobility a temperature dependence which indicates that scattering due to impurities is weak. This is expected for the high purity silicon used. The drift velocity-field relationships for electrons at room temperature and 77 K, derived from the initial current density-voltage characteristics, are shown to fit the relationships obtained with other methods by other workers. The transient current response for t > 0 remains practically constant at the initial value, thus indicating negligible trapping.
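
The trap-free theory referred to here is usually written as the Mott-Gurney relation J = (9/8) ε μ V^2 / L^3; the short sketch below evaluates it for illustrative silicon parameters that are assumptions, not the thesis's measured values.

```python
# Trap-free space-charge-limited current density (Mott-Gurney law).
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def mott_gurney_J(V, L_m, mu_m2_per_Vs, eps_r=11.7):
    """Trap-free SCLC current density in A/m^2 for a sample of thickness L."""
    return 9.0 / 8.0 * eps_r * EPS0 * mu_m2_per_Vs * V**2 / L_m**3

# Example (assumed numbers): 0.5 mm silicon, electron mobility ~0.14 m^2/Vs, 100 V.
print(mott_gurney_J(V=100.0, L_m=0.5e-3, mu_m2_per_Vs=0.14))
```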

Measurement of the initial (trap-free) current density-voltage characteristics after the irradiation indicates that the drift velocity-field relationship of electrons in silicon is affected by the radiation only at low temperature in the low field range. The effect is not sufficiently pronounced to be readily analyzed and no formal description of it is offered. In the transient response after irradiation for t > 0, the current decays from its initial value, thus revealing the presence of traps. To study these traps, in addition to transient measurements, the DC current characteristics were measured and shown to follow the theory of trap-dominated space-charge-limited current in solids. This theory was applied to a model consisting of two discrete levels in the forbidden band gap. Calculations and experiments agreed and the capture cross-sections of the trapping levels were obtained. This is the first experimental case known to us through which the flow of space-charge-limited current is so simply representable.

These results demonstrate the sensitivity of space-charge-limited current flow as a tool to detect traps and changes in the drift velocity-field relationship of carriers caused by radiation. They also establish that devices based on the mode of space-charge-limited current flow will be affected considerably by any type of radiation capable of introducing traps. This point has generally been overlooked so far, but is obviously quite significant.

Relevance:

30.00%

Publisher:

Abstract:

Planets are assembled from the gas, dust, and ice in the accretion disks that encircle young stars. Ices of chemical compounds with low condensation temperatures (<200 K), the so-called volatiles, dominate the solid mass reservoir from which planetesimals are formed and are thus available to build the protoplanetary cores of gas/ice giant planets. It has long been thought that the regions near the condensation fronts of volatiles are preferential birth sites of planets. Moreover, the main volatiles in disks are also the main C- and O-containing species in (exo)planetary atmospheres. Understanding the distribution of volatiles in disks and their role in planet-formation processes is therefore of great interest.

This thesis addresses two fundamental questions concerning the nature of volatiles in planet-forming disks: (1) how are volatiles distributed throughout a disk, and (2) how can we use volatiles to probe planet-forming processes in disks? We tackle the first question in two complementary ways. We have developed a novel super-resolution method to constrain the radial distribution of volatiles throughout a disk by combining multi-wavelength spectra. Thanks to the ordered velocity and temperature profiles in disks, we find that detailed constraints can be derived even with spatially and spectrally unresolved data, provided a wide range of energy levels is sampled. We also employ high spatial resolution interferometric images at (sub)mm frequencies from the Atacama Large Millimeter Array (ALMA) to directly measure the radial distribution of volatiles.
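
As a hedged illustration of why ordered disk kinematics make this possible: in a Keplerian disk each line-of-sight velocity channel is dominated by gas near a characteristic radius, so even a spatially unresolved line profile encodes radial information. The sketch below maps velocity channels to radii for generic, assumed disk parameters (not values from this thesis).

```python
# Keplerian rotation maps each projected velocity to a characteristic radius.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
AU = 1.496e11          # m

def keplerian_radius(v_los_kms, stellar_mass_msun=1.0, inclination_deg=45.0):
    """Radius (au) whose projected Keplerian speed equals the observed velocity."""
    v = abs(v_los_kms) * 1e3 / math.sin(math.radians(inclination_deg))
    return G * stellar_mass_msun * M_SUN / v**2 / AU

for v in (1.0, 3.0, 10.0):   # km/s channels map to progressively smaller radii
    print(v, round(keplerian_radius(v), 1))
```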

For the second question, we combine volatile gas emission measurements with those of the dust continuum emission or extinction to understand dust growth mechanisms in disks and disk instabilities at planet-forming distances from the central star. Our observations and models support the idea that the water vapor can be concentrated in regions near its condensation front at certain evolutionary stages in the lifetime of protoplanetary disks, and that fast pebble growth is likely to occur near the condensation fronts of various volatile species.

Relevance:

30.00%

Publisher:

Abstract:

The first part of this thesis combines Bolocam observations of the thermal Sunyaev-Zel’dovich (SZ) effect at 140 GHz with X-ray observations from Chandra, strong lensing data from the Hubble Space Telescope (HST), and weak lensing data from HST and Subaru to constrain parametric models for the distribution of dark and baryonic matter in a sample of six massive, dynamically relaxed galaxy clusters. For five of the six clusters, the full multiwavelength dataset is well described by a relatively simple model that assumes spherical symmetry, hydrostatic equilibrium, and entirely thermal pressure support. The multiwavelength analysis yields considerably better constraints on the total mass and concentration compared to analysis of any one dataset individually. The subsample of five galaxy clusters is used to place an upper limit on the fraction of pressure support in the intracluster medium (ICM) due to nonthermal processes, such as turbulent and bulk flow of the gas. We constrain the nonthermal pressure fraction at r500c to be less than 0.11 at 95% confidence, where r500c refers to the radius at which the average enclosed density is 500 times the critical density of the Universe. This is in tension with state-of-the-art hydrodynamical simulations, which predict a nonthermal pressure fraction of approximately 0.25 at r500c for the clusters in this sample.

The second part of this thesis focuses on the characterization of the Multiwavelength Sub/millimeter Inductance Camera (MUSIC), a photometric imaging camera that was commissioned at the Caltech Submillimeter Observatory (CSO) in 2012. MUSIC is designed to have a 14 arcminute, diffraction-limited field of view populated with 576 spatial pixels that are simultaneously sensitive to four bands at 150, 220, 290, and 350 GHz. It is well-suited for studies of dusty star forming galaxies, galaxy clusters via the SZ Effect, and galactic star formation.

MUSIC employs a number of novel detector technologies: broadband phased-arrays of slot dipole antennas for beam formation, on-chip lumped element filters for band definition, and Microwave Kinetic Inductance Detectors (MKIDs) for transduction of incoming light to electric signal. MKIDs are superconducting micro-resonators coupled to a feedline. Incoming light breaks apart Cooper pairs in the superconductor, causing a change in the quality factor and frequency of the resonator. This is read out as amplitude and phase modulation of a microwave probe signal centered on the resonant frequency. By tuning each resonator to a slightly different frequency and sending out a superposition of probe signals, hundreds of detectors can be read out on a single feedline. This natural capability for large-scale, frequency-domain multiplexing combined with relatively simple fabrication makes MKIDs a promising low temperature detector for future kilopixel sub/millimeter instruments. There is also considerable interest in using MKIDs for optical through near-infrared spectrophotometry due to their fast microsecond response time and modest energy resolution.

In order to optimize the MKID design to obtain suitable performance for any particular application, it is critical to have a well-understood physical model for the detectors and the sources of noise to which they are susceptible. MUSIC has collected many hours of on-sky data with over 1000 MKIDs. This work studies the performance of the detectors in the context of one such physical model. Chapter 2 describes the theoretical model for the responsivity and noise of MKIDs. Chapter 3 outlines the set of measurements used to calibrate this model for the MUSIC detectors. Chapter 4 presents the resulting estimates of the spectral response, optical efficiency, and on-sky loading. The measured detector response to Uranus is compared to the calibrated model prediction in order to determine how well the model describes the propagation of signal through the full instrument.

Chapter 5 examines the noise present in the detector timestreams during recent science observations. Noise due to fluctuations in atmospheric emission dominates at long timescales (frequencies below 0.5 Hz). Fluctuations in the amplitude and phase of the microwave probe signal due to the readout electronics contribute significant 1/f and drift-type noise at shorter timescales. The atmospheric noise is removed by creating a template for the fluctuations in atmospheric emission from weighted averages of the detector timestreams. The electronics noise is removed by using probe signals centered off-resonance to construct templates for the amplitude and phase fluctuations. The algorithms that perform the atmospheric and electronic noise removal are described. After removal, we find good agreement between the observed residual noise and our expectation for intrinsic detector noise over a significant fraction of the signal bandwidth.
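
As a hedged sketch of the common-mode atmospheric removal described above (a template built from a weighted average of the detector timestreams, fit and subtracted per detector), the snippet below shows the idea on synthetic data; the weighting and fitting choices are illustrative assumptions, not the MUSIC pipeline.

```python
# Common-mode removal: fit and subtract a shared atmospheric template.
import numpy as np

def remove_atmosphere(timestreams, weights=None):
    """timestreams: (n_detectors, n_samples) array; returns a cleaned copy."""
    if weights is None:                              # e.g. inverse-variance weights
        weights = 1.0 / timestreams.var(axis=1)
    template = np.average(timestreams, axis=0, weights=weights)
    template -= template.mean()
    cleaned = np.empty_like(timestreams)
    for i, ts in enumerate(timestreams):
        coupling = np.dot(ts - ts.mean(), template) / np.dot(template, template)
        cleaned[i] = ts - coupling * template        # subtract each detector's fit
    return cleaned

# Toy usage: a shared slow drift plus independent white noise per detector.
t = np.linspace(0, 60, 6000)
atm = np.cumsum(np.random.randn(t.size)) * 0.02
data = 0.5 * np.random.randn(8, t.size) + np.outer(np.linspace(0.8, 1.2, 8), atm)
print(data.std(), remove_atmosphere(data).std())
```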

Relevance:

30.00%

Publisher:

Abstract:

The twentieth century was marked by significant social transformation, reflected in a rapid increase in the life expectancy of the world population. In this context, the share of women who reach menopause is increasingly significant. Cardiovascular diseases, which are the leading cause of death among adults, and osteoporosis show a clear relationship with early menopause, that is, menopause occurring below the average age expected for a population. Research in the area, previously devoted almost entirely to treating the effects of the climacteric, is increasingly directed at understanding how habits or lifestyles can influence ovarian physiology and, consequently, alter the timing of menopause. The relationship with some of these habits, such as smoking, is already well established in the literature. However, the correlation with socioeconomic status, whether because of the difficulty of measuring this construct adequately or perhaps because of an insufficient number of high-quality studies, is not as evident. Educational attainment, considered one of the best indicators of socioeconomic status, both because the information is easier to obtain and because of its already demonstrated association with several health outcomes, was evaluated in this systematic review as an exposure factor for earlier age at menopause. This work is aligned with the growing effort to understand how social determinants can influence health outcomes and to seek effective strategies to reduce health inequalities. The electronic search strategy was developed specifically for the different databases (MEDLINE [PubMed] and LILACS) and complemented by cross-reference searching. Given the nature of the question, only observational studies were included, since experimental studies would not be feasible in this case. After an initial identification of 776 articles, 40 were selected for full-text review. In the end, this systematic review comprised 30 articles reporting the results of 32 studies. Studies that found no significant association between educational level and age at menopause formed the majority of the sample. The way educational level was measured and the methodology used to compare strata proved to be highly heterogeneous. This review found no unequivocal evidence of an association between educational level and age at menopause.

Relevance:

30.00%

Publisher:

Abstract:

In recent decades, the machine-shop production scheduling problem, referred to in the literature as the JSSP (Job Shop Scheduling Problem), has received great attention from researchers around the world. One reason for such interest is its high complexity: the JSSP is a combinatorial problem classified as NP-hard and, although a wide variety of methods and heuristics are capable of solving it, no method or heuristic is yet able to find optimal solutions for all the benchmark problems reported in the literature. The other reason is that the problem is present in the day-to-day operation of manufacturing industries in several sectors; since optimizing the schedule can significantly reduce production time and, consequently, make better use of production resources, it can have a strong impact on the profit of these industries, especially when the production sector accounts for a large share of their total costs. Among the heuristics that can be applied to this problem, Tabu Search and Particle Swarm Optimization perform well on most of the benchmark problems found in the literature. Tabu Search generally shows good, fast convergence to optimal or suboptimal points, but this convergence is frequently interrupted by cycling, and the method's performance depends strongly on the initial solution and on the tuning of its parameters. Particle Swarm Optimization tends to converge to optimal points at the cost of great computational effort, and its performance is also highly sensitive to parameter tuning. Since the different heuristics applied to the problem have their own strengths and weaknesses, some researchers are now concentrating their efforts on hybridizing existing heuristics in order to generate new hybrid heuristics that combine the qualities of their base heuristics, thereby reducing or even eliminating their negative aspects. In this work, three hybridization models based on the general scheme of local search heuristics are first presented and tested with Tabu Search and Particle Swarm Optimization. An adaptation of the Particle Collision method, originally developed for continuous problems, is then presented, in which Tabu Search is used as the local exploration operator and mutation operators are used to perturb the solution. The results show that, in the case of the hybrid models, the complementary and distinct natures of Tabu Search and Particle Swarm Optimization, as presented here, give rise to robust algorithms capable of producing optimal or very good solutions that are much less sensitive to the parameter tuning of each of the original methods. In the case of the Particle Collision method, the new algorithm attenuates the sensitivity to parameter tuning and avoids the cycling of Tabu Search, thereby producing better results.
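
As a rough illustration of the Tabu Search ingredient in these hybrids, the sketch below runs a generic tabu loop (adjacent-swap neighborhood, short-term tabu list with aspiration) on a toy permutation objective; the encoding, neighborhood, and parameters are simplifying assumptions, not the thesis's actual operators.

```python
# Generic tabu search skeleton over permutations (illustrative, not the thesis's).
import random

def tabu_search(evaluate, initial, n_iters=200, tabu_tenure=10):
    """Tabu search using adjacent-swap moves and a best-improvement step."""
    current = list(initial)
    best, best_cost = list(current), evaluate(current)
    tabu = {}  # move -> iteration index until which the move stays forbidden

    for it in range(n_iters):
        candidates = []
        for move in [(i, i + 1) for i in range(len(current) - 1)]:
            i, j = move
            neighbor = list(current)
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            cost = evaluate(neighbor)
            # Aspiration: a tabu move is still allowed if it beats the best known.
            if tabu.get(move, -1) < it or cost < best_cost:
                candidates.append((cost, move, neighbor))
        if not candidates:
            break
        cost, move, current = min(candidates, key=lambda c: c[0])
        tabu[move] = it + tabu_tenure     # forbid repeating this move for a while
        if cost < best_cost:
            best, best_cost = list(current), cost
    return best, best_cost

# Toy objective: order 8 "jobs" to minimize a weighted sum of completion positions.
weights = [random.randint(1, 9) for _ in range(8)]
cost_fn = lambda perm: sum((pos + 1) * weights[j] for pos, j in enumerate(perm))
print(tabu_search(cost_fn, initial=list(range(8))))
```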

Relevance:

30.00%

Publisher:

Abstract:

We investigate the evolution of localized blobs of swirling or buoyant fluid in an infinite, inviscid, electrically conducting fluid. We consider the three cases of a strong imposed magnetic field, a weak imposed magnetic field, and no magnetic field. For a swirling blob in the absence of a magnetic field, we find, in line with others, that the blob bursts radially outward under the action of the centrifugal force, forming a thin annular vortex sheet. A simple model of this process predicts that the vortex sheet thins exponentially fast and that it moves radially outward with constant velocity. These predictions are verified by high-resolution numerical simulations. When an intense magnetic field is applied, this phenomenon is suppressed, with the energy and angular momentum of the blob now diffusing axially along the magnetic field lines, converting the blob into a columnar structure. For modest or weak magnetic fields, there are elements of both types of behavior, with the radial bursting dominating over axial diffusion for weak fields. However, even when the magnetic field is very weak, the flow structure is quite distinct from that of the nonmagnetic case. In particular, a small but finite magnetic field places a lower bound on the thickness of the annular vortex sheet and produces an annulus of counter-rotating fluid that surrounds the vortex core. The behavior of the buoyant blob is similar. In the absence of a magnetic field, it rapidly develops the mushroomlike shape of a thermal, with a thin vortex sheet at the top and sides of the mushroom. Again, a simple model of this process predicts that the vortex sheet at the top of the thermal thins exponentially fast and rises with constant velocity. These predictions are consistent with earlier numerical simulations. Curiously, however, it is shown that the net vertical momentum associated with the blob increases linearly in time, despite the fact that the vertical velocity at the front of the thermal is constant. As with the swirling blob, an imposed magnetic field inhibits the formation of a vortex sheet. A strong magnetic field completely suppresses the phenomenon, replacing it with an axial diffusion of momentum, while a weak magnetic field allows the sheet to form, but places a lower bound on its thickness. The magnetic field does not, however, change the net vertical momentum of the blob, which always increases linearly with time.

Relevance:

30.00%

Publisher:

Abstract:

It is essential to monitor deteriorated civil engineering structures closely to detect early symptoms of serious failures. A wireless sensor network can be an effective system for monitoring civil engineering structures. Sensors can be deployed quickly, especially in difficult-to-access areas, and the network can be extended without any cabling. Since our target is to monitor deterioration of civil engineering structures such as cracks in tunnel linings, most of the sensor locations are known and sensors are not required to move dynamically. Therefore, we focus on developing a deployment plan for a static network in order to reduce the value of a cost function such as the initial installation cost or the summed communication distances of the network. The key issue in the deployment is the location of the relays that forward sensing data from sensors to a data collection device called a gateway. In this paper, we propose a relay deployment-planning tool that can be used to design a wireless sensor network for monitoring civil engineering structures. For the planning tool, we formalize the model and implement a local-search-based algorithm to find a quasi-optimal solution. Our solution guarantees two routes from each sensor to a gateway, which provides higher reliability of the network. We also show the application of our experimental tool to the actual environment in the London Underground.
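
As a hedged sketch of the local-search planning idea (not the paper's exact model), the snippet below picks k relay sites from a candidate grid to reduce a simplified cost made of sensor-to-relay and relay-to-gateway distances; the move set and cost function are illustrative assumptions, and the two-route requirement is not enforced here.

```python
# Simplified local search for relay placement (illustrative assumptions only).
import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cost(relays, sensors, gateway):
    # Each sensor routes through its nearest relay; relays link to the gateway.
    return (sum(min(dist(s, r) for r in relays) for s in sensors)
            + sum(dist(r, gateway) for r in relays))

def local_search(candidates, sensors, gateway, k=3, iters=500):
    best = random.sample(candidates, k)
    best_cost = cost(best, sensors, gateway)
    for _ in range(iters):
        # Swap move: replace one chosen relay site with an unused candidate.
        trial = list(best)
        trial[random.randrange(k)] = random.choice(
            [c for c in candidates if c not in best])
        c = cost(trial, sensors, gateway)
        if c < best_cost:
            best, best_cost = trial, c
    return best, best_cost

sensors = [(random.random() * 100, random.random() * 100) for _ in range(20)]
candidates = [(x, y) for x in range(0, 101, 10) for y in range(0, 101, 10)]
print(local_search(candidates, sensors, gateway=(0.0, 0.0)))
```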

Relevance:

30.00%

Publisher:

Abstract:

High-throughput DNA sequencing (HTS) instruments today are capable of generating millions of sequencing reads in a short period of time, and this represents a serious challenge to current bioinformatics pipelines in processing such an enormous amount of data in a fast and economical fashion. Modern graphics cards are powerful processing units that consist of hundreds of scalar processors in parallel in order to handle the rendering of high-definition graphics in real-time. It is this computational capability that we propose to harness in order to accelerate some of the time-consuming steps in analyzing data generated by HTS instruments. We have developed BarraCUDA, a novel sequence mapping software package that utilizes the parallelism of NVIDIA CUDA graphics cards to map sequencing reads to a particular location on a reference genome. While delivering a similar mapping fidelity to other mainstream programs, BarraCUDA is an order of magnitude faster in mapping throughput compared to its CPU counterparts. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the mapping throughput. BarraCUDA is designed to take advantage of the parallelism of GPUs to accelerate the mapping of millions of sequencing reads generated by HTS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available at http://seqbarracuda.sf.net

Relevance:

30.00%

Publisher:

Abstract:

Nuclear power generation offers a reliable, low-impact and large-scale alternative to fossil fuels. However, concerns exist over the safety and sustainability of this method of power production, and it remains unpopular with some governments and pressure groups throughout the world. Fast thorium fuelled accelerator-driven sub-critical reactors (ADSRs) offer a possible route to providing further re-assurance regarding these concerns on account of their properties of enhanced safety through sub-critical operation combined with reduced actinide waste production from the thorium fuel source. The appropriate sub-critical margin at which these reactors should operate is the subject of continued debate. Commercial interests favour a small sub-critical margin in order to minimise the size of the accelerator needed for a given power output, whilst enhanced safety would be better satisfied through larger sub-critical margins to further minimise the possibility of a criticality excursion. Against this background, this paper examines some of the issues affecting reactor safety inherent within thorium fuel sources resulting from the essential 232Th → 233Th → 233Pa → 233U breeding chain. Differences in the decay half-lives and fission and capture cross-sections of 233Pa and 233U can result in significant changes in the reactivity of the fuel following changes in the reactor power. Reactor operation is represented using a homogeneous lumped fast reactor model that can simulate the evolution of actinides and reactivity variations to first-order accuracy. The reactivity of the fuel is shown to increase significantly following a loss of power to the accelerator. Where the sub-critical operating margins are small this can result in a criticality excursion unless some form of additional intervention is made, for example through the insertion of control rods. © 2012 Elsevier Ltd. All rights reserved.
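
As a hedged illustration of the reactivity mechanism described above: once the beam (and hence the neutron flux) is lost, the 233Pa inventory built up during operation decays with a roughly 27-day half-life into fissile 233U, so the fissile inventory rises over days to weeks. The inventory in the sketch below is an assumed number, not a result from the paper.

```python
# 233U produced by 233Pa decay after a loss of accelerator power (no flux,
# so capture and fission are neglected). Half-life is the well-known ~27 days.
import math

T_HALF_PA233_DAYS = 26.97
LAMBDA = math.log(2) / T_HALF_PA233_DAYS

def extra_u233(n_pa_at_trip, days_after_trip):
    """233U atoms produced by 233Pa decay after the accelerator trips."""
    return n_pa_at_trip * (1.0 - math.exp(-LAMBDA * days_after_trip))

n_pa0 = 1.0e24          # assumed 233Pa inventory at the moment of the trip
for d in (1, 7, 30, 90):
    print(f"{d:3d} days: extra 233U = {extra_u233(n_pa0, d):.2e} atoms")
```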