982 results for certificate signatures


Relevance: 10.00%

Abstract:

We measure the effects of phonon confinement on the Raman spectra of silicon nanowires (SiNWs). We show that features previously attributed to phonon confinement in SiNWs and other nanostructures are in fact inconsistent with confinement and are instead caused by the intense local heating induced by the laser power used for Raman measurements. This heating is peculiar to nanostructures and would require orders of magnitude more power in bulk Si. By varying the temperature, power, and excitation energy, we identify the contributions of pure confinement, heating, and carrier photo-excitation. After eliminating laser-related effects, the Raman spectra show confinement signatures typical of quantum wires. © 2003 Elsevier B.V. All rights reserved.
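The abstract gives no formula, but confinement analyses of this kind are conventionally framed with the Richter-Wang-Ley lineshape, in which relaxation of the q ≈ 0 Raman selection rule lets phonons away from the zone center contribute. The expression below is the standard literature form, not the paper's own; the Gaussian weighting shown is one common choice for a wire diameter d:

```latex
I(\omega) \;\propto\; \int_{\mathrm{BZ}}
  \frac{|C(0,\mathbf{q})|^{2}}{[\omega-\omega(\mathbf{q})]^{2}+(\Gamma_{0}/2)^{2}}\,\mathrm{d}^{3}q,
\qquad
|C(0,\mathbf{q})|^{2} \;\approx\; \exp\!\left(-\frac{q^{2}d^{2}}{16\pi^{2}}\right),
```

where ω(q) is the bulk phonon dispersion and Γ0 the intrinsic linewidth. Laser heating, by contrast, shifts and broadens the line through anharmonic effects, which is precisely what the measurements above disentangle from true confinement.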

Relevance: 10.00%

Abstract:

[ES] The theoretical framework of this work aims to explain cooperation processes designed for microenterprises, with the purpose of establishing proposals and arguments for the formation of agreements, identifying the factors that affect their development, and describing the national, regional, and local innovation systems that encourage cooperation between firms. On this basis, a methodology is proposed to promote cooperation among microenterprises, coordinated by local agents and the public administration, for the Micro and e-Micro projects (2002-2007) of the city of Murcia.

Relevance: 10.00%

Abstract:

By introducing the OneFile e-portfolio system, the Motor Vehicle department at Huntingdonshire Regional College has revolutionised its teaching and learning delivery, improved organisational efficiency, and helped students achieve qualifications more quickly. Bundles of paper have been scrapped, students can now upload video and audio as reflective evidence, and assessors no longer have to make unnecessary visits to collect papers and signatures. What's more, the department has saved time and money, diversified learning, and reduced its carbon footprint.

Relevance: 10.00%

Abstract:

Without knowledge of basic seafloor characteristics, the ability to address any number of critical marine and coastal management issues is diminished. For example, management and conservation of essential fish habitat (EFH), a requirement mandated by federally guided fishery management plans (FMPs), requires among other things a description of habitats for federally managed species. Although the attributes important to habitat are numerous, the tools currently available cannot describe many of them efficiently and effectively, especially at the scales required. However, several characteristics of seafloor morphology are readily obtainable at multiple scales and can serve as useful descriptors of habitat. Recent advancements in acoustic technology, such as multibeam echosounding (MBES), can provide remote indication of surficial sediment properties such as texture, hardness, or roughness, and further permit highly detailed renderings of seafloor morphology. With acoustic-based surveys providing a relatively efficient method for data acquisition, there exists a need for efficient and reproducible automated segmentation routines to process the data. Using MBES data collected by the Olympic Coast National Marine Sanctuary (OCNMS), and through a contracted seafloor survey, we expanded on the techniques of Cutter et al. (2003) to describe an objective, repeatable process that uses parameterized local Fourier histogram (LFH) texture features to automate segmentation of surficial sediments from acoustic imagery using a maximum likelihood decision rule. Sonar signatures and classification performance were evaluated using video imagery obtained from a towed camera sled. Segmented raster images were converted to polygon features and attributed using a hierarchical deep-water marine benthic classification scheme (Greene et al. 1999) for use in a geographical information system (GIS). (PDF contains 41 pages.)
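The processing chain described above, local Fourier texture features followed by a maximum likelihood decision rule, can be sketched as follows. This is a simplified stand-in for the parameterized LFH features of Cutter et al. (2003), not the report's implementation; the tile handling, histogram range, and Gaussian class model are illustrative assumptions.

```python
# A minimal sketch of texture-based sediment segmentation: histograms
# of local Fourier magnitudes per backscatter tile, classified with a
# Gaussian maximum-likelihood decision rule. Illustrative only.
import numpy as np

def lfh_features(tile, n_bins=16):
    """Histogram of log Fourier magnitudes for one backscatter tile."""
    spec = np.abs(np.fft.fft2(tile))
    # fixed range keeps the histograms comparable across tiles
    hist, _ = np.histogram(np.log1p(spec), bins=n_bins,
                           range=(0.0, 12.0), density=True)
    return hist

def fit_class(training_tiles):
    """Mean and (regularized) feature covariance for one sediment class."""
    X = np.array([lfh_features(t) for t in training_tiles])
    return X.mean(axis=0), np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])

def classify(tile, models):
    """Maximum-likelihood decision rule over the fitted class models."""
    x = lfh_features(tile)
    def loglik(mu, cov):
        d = x - mu
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (d @ np.linalg.solve(cov, d) + logdet)
    return max(models, key=lambda name: loglik(*models[name]))
```

In the actual workflow, the per-tile labels would then be converted to polygon features and attributed under the Greene et al. (1999) scheme for use in a GIS.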

Relevance: 10.00%

Abstract:

The study examined the sustainability of various indigenous technologies in post-harvest fishery operations in Edo and Delta States (Nigeria). A total of seventy processors, selected at random, were interviewed during the survey. The data obtained were analysed by descriptive statistics. The results revealed that the majority of the fish processors within the study areas were married women who were not educated beyond the First School Leaving Certificate. Most of the fish processed were bought fresh, and the commonest method of preservation/processing was smoking. The processing equipment used comprised the Chorkor smoking kiln and the drum smoker, while the commonest source of energy was firewood. The processing activities within the communities were found to be profitable. However, it was observed that, owing to the high cost of processing materials and equipment, economic growth and the standard of living remain quite low. Some recommendations were made to improve the traditional methods of fish preservation and processing.

Relevance: 10.00%

Abstract:

Microbes have profoundly influenced the Earth's environments through time. Records of these interactions come primarily from the development and implementation of proxies that relate known modern processes to chemical signatures in the sedimentary record. This thesis is presented in two parts, focusing first on novel proxy development in the modern and second on interpretation of past environments using well-established methods. Part 1, presented in two chapters, builds on previous observations that different microbial metabolisms produce vastly different lipid hydrogen isotopic compositions. Chapter 1 evaluates the potential environmental expression of metabolism-based fractionation differences by exploiting the natural microbial community gradients in hydrothermal springs. We find a very large range in isotopic composition that can be demonstrably linked to the microbial source(s) of the fatty acids at each sample site. In Chapter 2, anaerobic culturing techniques are used to evaluate the hydrogen isotopic fractionations produced by anaerobic microbial metabolisms. Although the observed fractionation patterns are similar to those reported for aerobic cultures for some organisms, others show large differences. Part 2 shifts focus from the modern to the ancient and uses classical stratigraphic methods combined with isotope stratigraphy to interpret microbial and environmental changes during the latest Precambrian Era. Chapter 3 presents a detailed characterization of the facies, parasequence development, and stratigraphic architecture of the Ediacaran Khufai Formation. Chapter 4 presents measurements of carbon, oxygen, and sulfur isotopic ratios in stratigraphic context. Large oscillations in the isotopic composition of sulfate constrain the size of the marine sulfate reservoir and suggest incorporation of an isotopically enriched source. Because these data were measured in stratigraphic context, we can assert with confidence that the isotopic shifts are not related to stratigraphic surfaces or facies type but instead reflect the evolution of the ocean through time. These data integrate into the global chemostratigraphic record and contribute to the emerging picture of changing marine chemistry during the latest Precambrian Era.

Relevance: 10.00%

Abstract:

We investigate the nonlinear propagation of ultrashort pulses on resonant intersubband transitions in multiple semiconductor quantum wells. It is shown that the nonlinearity rooted in electron-electron interactions destroys the condition giving rise to self-induced transparency. However, by adjusting the area of the input pulse, we find signatures of self-induced transmission due to a full Rabi flopping of the electron density, and this phenomenon can be approximately interpreted by the traditional area theorem by defining an effective area of the input pulse.
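For reference, the pulse area invoked here is the standard McCall-Hahn quantity; the abstract does not spell out its notation, so the definitions below are the textbook forms:

```latex
\theta(z) \;=\; \int_{-\infty}^{\infty} \Omega_{R}(z,t)\,\mathrm{d}t,
\qquad
\Omega_{R} = \frac{\mu\,E(z,t)}{\hbar},
\qquad
\frac{\mathrm{d}\theta}{\mathrm{d}z} \;=\; -\frac{\alpha}{2}\,\sin\theta,
```

where μ is the transition dipole moment and α the resonant absorption coefficient. A pulse of area 2π drives a complete Rabi flop and propagates with little net absorption, which is the behavior the effective-area argument above recovers despite the electron-electron nonlinearity.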

Relevance: 10.00%

Abstract:

The dynamic properties of a structure are a function of its physical properties, and changes in the physical properties of the structure, including the introduction of structural damage, can cause changes in its dynamic behavior. Structural health monitoring (SHM) and damage detection methods provide a means to assess the structural integrity and safety of a civil structure using measurements of its dynamic properties. In particular, these techniques enable a quick damage assessment following a seismic event. In this thesis, the application of high-frequency seismograms to damage detection in civil structures is investigated.

Two novel methods for SHM are developed and validated using small-scale experimental testing, existing structures in situ, and numerical testing. The first method is developed for pre-Northridge steel-moment-resisting frame buildings that are susceptible to weld fracture at beam-column connections. The method is based on using the response of a structure to a nondestructive force (i.e., a hammer blow) to approximate the response of the structure to a damage event (i.e., weld fracture). The method is applied to a small-scale experimental frame, where the impulse response functions of the frame are generated during an impact hammer test. The method is also applied to a numerical model of a steel frame, in which weld fracture is modeled as the tensile opening of a Mode I crack. Impulse response functions are experimentally obtained for a steel moment-resisting frame building in situ. Results indicate that while acceleration and velocity records generated by a damage event are best approximated by the acceleration and velocity records generated by a colocated hammer blow, the method may not be robust to noise. The method seems better suited for damage localization, where information such as arrival times and peak accelerations can also provide an indication of the damage location. This is of significance for sparsely instrumented civil structures.

The second SHM method is designed to extract features from high-frequency acceleration records that may indicate the presence of damage. As short-duration high-frequency signals (i.e., pulses) can be indicative of damage, this method relies on the identification and classification of pulses in the acceleration records. It is recommended that, in practice, the method be combined with a vibration-based method that can be used to estimate the loss of stiffness. Briefly, pulses observed in the acceleration time series when the structure is known to be in an undamaged state are compared with pulses observed when the structure is in a potentially damaged state. By comparing the pulse signatures from these two situations, changes in the high-frequency dynamic behavior of the structure can be identified, and damage signals can be extracted and subjected to further analysis. The method is successfully applied to a small-scale experimental shear beam that is dynamically excited at its base using a shake table and damaged by loosening a screw to create a moving part. Although the damage is aperiodic and nonlinear in nature, the damage signals are accurately identified, and the location of damage is determined using the amplitudes and arrival times of the damage signal. The method also successfully detects the occurrence of damage in a test bed data set provided by the Los Alamos National Laboratory, in which nonlinear damage is introduced into a small-scale steel frame by installing a bumper mechanism that limits the motion between two floors. The method proves robust despite a low sampling rate, though false negatives (undetected damage signals) begin to occur at high levels of damage when the frequency of damage events increases. The method is also applied to acceleration data recorded on a damaged cable-stayed bridge in China, provided by the Center of Structural Monitoring and Control at the Harbin Institute of Technology. Acceleration records recorded after the date of damage show a clear increase in high-frequency short-duration pulses compared to those recorded previously. One pulse from the undamaged state and two damage pulses are identified from the data. The occurrence of the detected damage pulses is consistent with a progression of damage and matches the known chronology of damage.
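A minimal sketch of the pulse-extraction idea, assuming a simple threshold detector: flag short-duration, high-frequency excursions in an acceleration record against a robust noise baseline. The cutoff frequency, threshold factor, and grouping window below are illustrative choices, not the thesis's parameters.

```python
# Illustrative pulse detector for acceleration records (assumed
# parameters throughout; not the thesis's implementation).
import numpy as np
from scipy.signal import butter, filtfilt

def find_pulses(acc, fs, f_cut=50.0, k=5.0):
    """Return (arrival time in s, peak amplitude) of candidate pulses."""
    b, a = butter(4, f_cut / (fs / 2), btype="highpass")
    hp = filtfilt(b, a, acc)                      # high-passed record
    sigma = np.median(np.abs(hp)) / 0.6745        # robust noise scale
    idx = np.flatnonzero(np.abs(hp) > k * sigma)  # threshold crossings
    pulses = []
    gap = max(int(0.01 * fs), 1)  # merge crossings closer than ~10 ms
    for grp in np.split(idx, np.where(np.diff(idx) > gap)[0] + 1):
        if grp.size:
            peak = grp[np.argmax(np.abs(hp[grp]))]
            pulses.append((peak / fs, hp[peak]))
    return pulses
```

Comparing the pulses detected in known-undamaged records with those from the potentially damaged state, as described above, then isolates candidate damage signals for localization and further analysis.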

Relevance: 10.00%

Abstract:

Thrust fault earthquakes are investigated in the laboratory by generating dynamic shear ruptures along pre-existing frictional faults in rectangular plates. A considerable body of evidence suggests that dip-slip earthquakes exhibit enhanced ground motions in the acute hanging wall wedge as an outcome of broken symmetry between the hanging wall and foot wall plates with respect to the earth surface. To understand the physical behavior of thrust fault earthquakes, particularly ground motions near the earth surface, ruptures are nucleated in analog laboratory experiments and guided up-dip towards the simulated earth surface. The transient slip event and emitted radiation mimic a natural thrust earthquake. High-speed photography and laser velocimeters capture the rupture evolution, yielding a full-field view of photoelastic fringe contours proportional to maximum shearing stresses as well as continuous ground motion velocity records at discrete points on the specimen. Earth surface-normal measurements validate selective enhancement of hanging wall ground motions for both sub-Rayleigh and super-shear rupture speeds. The earth surface breaks upon arrival of the rupture tip at the fault trace, generating prominent Rayleigh surface waves. A rupture wave is sensed in the hanging wall but is absent from the foot wall plate: a direct consequence of the proximity from fault to seismometer. Signatures in earth surface-normal records attenuate with distance from the fault trace. Super-shear earthquakes feature greater amplitudes of ground shaking profiles, as expected from the increased tectonic pressures required to induce the super-shear transition. Paired stations measure fault-parallel and fault-normal ground motions at various depths, which yield slip and opening rates through direct subtraction of like components. Peak fault slip and opening rates associated with the rupture tip increase with proximity to the fault trace, a result of selective ground motion amplification in the hanging wall. Fault opening rates indicate that the hanging and foot walls detach near the earth surface, a phenomenon promoted by a decrease in magnitude of far-field tectonic loads. Subsequent shutting of the fault sends an opening pulse back down-dip. In the case of a sub-Rayleigh earthquake, feedback from the reflected S wave re-ruptures the locked fault at super-shear speeds, providing another mechanism of super-shear transition.
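The "direct subtraction of like components" mentioned above amounts to differencing the paired-station records across the fault. In the notation below (ours, not the thesis's), with hanging-wall (HW) and foot-wall (FW) velocities resolved parallel and normal to the fault:

```latex
\dot{s}(t) \;=\; \dot{u}_{\parallel}^{\mathrm{HW}}(t) - \dot{u}_{\parallel}^{\mathrm{FW}}(t),
\qquad
\dot{\delta}(t) \;=\; \dot{u}_{\perp}^{\mathrm{HW}}(t) - \dot{u}_{\perp}^{\mathrm{FW}}(t),
```

so that the first difference is the fault slip rate and the second the opening rate; a positive opening rate signals the detachment of the hanging and foot walls near the free surface described above.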

Relevance: 10.00%

Abstract:

The works presented in this thesis explore a variety of extensions of the standard model of particle physics that are motivated by baryon number (B) and lepton number (L), or some combination thereof. In the standard model, both baryon number and lepton number are accidental global symmetries violated only by non-perturbative weak effects, though the combination B-L is exactly conserved. Although there is currently no evidence for considering these symmetries as fundamental, there are strong phenomenological bounds restricting the existence of new physics violating B or L. In particular, there are strict limits on the lifetime of the proton, whose decay would violate baryon number by one unit and lepton number by an odd number of units.

The first paper included in this thesis explores some of the simplest possible extensions of the standard model in which baryon number is violated but the proton does not decay as a result. The second paper extends this analysis to explore models in which baryon number is conserved but lepton flavor violation is present. Special attention is given to the processes of μ to e conversion and μ → eγ, which are constrained by existing experimental limits and relevant to future experiments.

The final two papers explore extensions of the minimal supersymmetric standard model (MSSM) in which both baryon number and lepton number, or the combination B-L, are elevated to the status of being spontaneously broken local symmetries. These models have a rich phenomenology including new collider signatures, stable dark matter candidates, and alternatives to the discrete R-parity symmetry usually built into the MSSM in order to protect against baryon and lepton number violating processes.

Relevance: 10.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, informing the next test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
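A toy, noise-free sketch of the EC2 greedy step, under our own simplifying assumptions (the thesis handles noisy responses and an accelerated greedy implementation): hypotheses carry a theory label and a prior, an edge links hypotheses from different theories with weight equal to the product of their priors, and any test on which two linked hypotheses predict different outcomes is guaranteed to cut that edge whatever the subject answers.

```python
# Toy, noise-free sketch of EC2-style greedy test selection. The
# Hypothesis type, `predict` callback, and all structures here are
# illustrative assumptions, not the BROAD implementation.
from collections import namedtuple
from itertools import combinations

Hypothesis = namedtuple("Hypothesis", ["name", "theory"])

def ec2_score(test, hypotheses, prior, predict):
    """Prior mass of inter-theory edges guaranteed to be cut by `test`.

    If two hypotheses from different theories predict different
    outcomes, one of them is eliminated whichever outcome occurs,
    so the edge between them is cut with certainty (noise-free case).
    """
    return sum(
        prior[h1] * prior[h2]
        for h1, h2 in combinations(hypotheses, 2)
        if h1.theory != h2.theory and predict(h1, test) != predict(h2, test)
    )

def next_test(tests, hypotheses, prior, predict):
    """Greedy step: run the candidate test that cuts the most edge mass."""
    return max(tests, key=lambda t: ec2_score(t, hypotheses, prior, predict))
```

After each response, hypotheses inconsistent with the observed choice are discarded (or down-weighted, in the noisy case), the priors are renormalized, and the greedy step repeats.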

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting; most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
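For reference, the families compared here are conventionally parameterized as follows; these are standard textbook forms, and the thesis's exact parameterizations, including its (α, β) quasi-hyperbolic notation, may differ:

```latex
D_{\mathrm{exp}}(t) = \delta^{t},
\qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t},
\qquad
D_{\mathrm{qh}}(t) =
\begin{cases}
1 & t = 0,\\
\beta\,\delta^{t} & t > 0,
\end{cases}
\qquad
D_{\mathrm{gh}}(t) = (1 + \alpha t)^{-\beta/\alpha},
```

where a payoff x at delay t is valued as x D(t). Present bias appears as β < 1 in the quasi-hyperbolic form, and the generalized hyperbolic family nests exponential discounting in the limit α → 0.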

In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, hyperbolic discounting and temporal choice inconsistency arise.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from those the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
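The reference-dependent value function underlying such predictions is conventionally the Tversky-Kahneman form; this is a standard parameterization, as the abstract does not give the paper's discrete choice specification:

```latex
v(x) \;=\;
\begin{cases}
(x - r)^{\alpha} & x \ge r,\\
-\lambda\,(r - x)^{\beta} & x < r,
\end{cases}
\qquad \lambda > 1,
```

where r is the reference point (here, the price the consumer has come to expect) and λ > 1 makes losses loom larger than equivalent gains, which is what drives the excess demand response when a discount ends.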

In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 10.00%

Abstract:

[ES] The purpose of this document is to carry out the energy certification of a dwelling in a residential block located in Leioa. In addition, different measures to achieve better thermal performance are studied, and their economic viability is analysed.

Relevance: 10.00%

Abstract:

[ES] The objectives of this work are to carry out an energy audit of a residential building and to plan a project of possible improvements to raise the rating obtained. The certification was performed with the CE3X program, which is approved by the Ministry of Industry. The building's rating was 89.8 G and, after implementing the improvements, a score of 14.8 C was achieved, in accordance with current European regulations. Several images showing the operation of the CE3X program are also attached, together with the actual certificate obtained. The planning and budget for the improvements are included in this document.

Relevance: 10.00%

Abstract:

[EU] Today, under Directive 2010/31/EU, an energy certificate is mandatory for dwellings that are to be built, rented, or sold. The certificate indicates, by letter grade, how much a dwelling pollutes the environment, that is, how much CO2 it emits. In this work, the CE3X program was used to obtain the energy rating; to that end, the data of the dwelling, the thermal envelope, the characteristics of the installations, and the thermal collectors were entered. With this, a rating of E was obtained, but by adding insulation to the facade from the outside and installing a condensing boiler the rating was improved to C. Consequently, the measures applied are good ones, since the rating improved markedly and the building became more efficient.

Relevance: 10.00%

Abstract:

Understanding the origin of life on Earth has long fascinated the minds of the global community, and has been a driving factor in interdisciplinary research for centuries. Beyond the pioneering work of Darwin, perhaps the most widely known study in the last century is that of Miller and Urey, who examined the possibility of the formation of prebiotic chemical precursors on the primordial Earth [1]. More recent studies have shown that amino acids, the chemical building blocks of the biopolymers that comprise life as we know it on Earth, are present in meteoritic samples, and that the molecules extracted from the meteorites display isotopic signatures indicative of an extraterrestrial origin [2]. The most recent major discovery in this area has been the detection of glycine (NH2CH2COOH), the simplest amino acid, in pristine cometary samples returned by the NASA STARDUST mission [3]. Indeed, the open questions left by these discoveries, both in the public and scientific communities, hold such fascination that NASA has designated the understanding of our "Cosmic Origins" as a key mission priority.

Despite these exciting discoveries, our understanding of the chemical and physical pathways to the formation of prebiotic molecules is woefully incomplete. This is largely because we do not yet fully understand how the interplay between grain-surface and sub-surface ice reactions and the gas phase affects astrophysical chemical evolution, and our knowledge of chemical inventories in these regions is incomplete. The research presented here aims to directly address both of these issues, so that future work to understand the formation of prebiotic molecules has a solid foundation on which to build.

From an observational standpoint, a dedicated campaign was undertaken to identify gas-phase hydroxylamine (NH2OH), potentially a direct precursor to glycine. No trace of NH2OH was found. These observations motivated a refinement of the chemical models of glycine formation and have largely ruled out a gas-phase route to the synthesis of the simplest amino acid in the ISM. The mystery of the carrier of a series of transitions, B11244, was resolved using observational data toward a large number of sources, confirming this important carbon-chemistry intermediate as l-C3H+ and identifying it in at least two new environments. Finally, the doubly-nitrogenated molecule carbodiimide (HNCNH) was identified in the ISM for the first time through maser emission features in the centimeter-wavelength regime.

In the laboratory, a terahertz time-domain spectrometer was constructed to obtain the experimental spectra necessary to search for solid-phase species in the THz region of the spectrum. These investigations have shown that the THz spectra depend strikingly on the large-scale, long-range (i.e., lattice) structure of the ices. A database of molecular spectra has been started, covering both the simplest and most abundant ice species, which have already been identified, and a number of more complex species. The exquisite sensitivity of the THz spectra to both the structure and thermal history of these ices may lead to better probes of complex chemical and dynamical evolution in interstellar environments.