975 results for Real samples


Relevance:

20.00%

Publisher:

Abstract:

This document describes the implementation of the project I carried out on vehicle localization at the Mercedes Benz España factory in Vitoria-Gasteiz. During the project, several studies were conducted to achieve a correct deployment of the technologies employed. Different alternatives for positioning the components were evaluated, and various tests were run to verify that the solution works correctly. The project is deployed in several phases. The first covers the study of one particular area of the factory, the so-called "Área Técnica" (Technical Area), which holds vehicles that require rework once they have been assembled; this area serves as a pilot so that, once the deployment is complete and its success verified, the solution can be extended to the remaining areas. Before I joined, a study had been carried out on the placement of the necessary elements in this area, showing the possibilities and benefits that tracking vehicles inside the factory would bring. The next phase will roll the solution out to the remaining areas of the Vitoria-Gasteiz factory, together with the installation of devices located at the gates. These will help improve vehicle localization, since they make it possible to know whether a vehicle is inside or outside the factory. Finally, the solution has been integrated into the systems currently used at the factory to manage vehicles throughout their life cycle.

Relevance:

20.00%

Publisher:

Abstract:

A time-domain spectrometer for use in the terahertz (THz) spectral range was designed and constructed. Because few methods exist for generating and detecting THz radiation, the spectrometer is expected to find broad application to solid-, liquid-, and gas-phase samples. In particular, knowledge of complex organic chemistry and chemical abundances in the interstellar medium (ISM) can be obtained when laboratory spectra are compared with astronomical data. The THz spectral region is of particular interest because of its reduced line density compared with the millimeter-wave spectrum, the existence of high-resolution observatories, and the potentially strong transitions arising from the lowest-lying vibrational modes of large molecules.

The heart of the THz time-domain spectrometer (THz-TDS) is the ultrafast laser. Because of the femtosecond duration of ultrafast laser pulses and the energy-time uncertainty relationship, the pulses typically have a bandwidth of several THz. By various means of optical rectification, the optical pulse carrier envelope shape, i.e., the intensity-time profile, can be transferred to the phase of the resulting THz pulse. As a consequence, optical pump-THz probe spectroscopy is readily achieved, as demonstrated in studies of dye-sensitized TiO2 discussed in chapter 4. Detection of the terahertz radiation is commonly based on electro-optic sampling and provides full phase information. This allows accurate determination of both the real and imaginary index of refraction, the so-called optical constants, without additional analysis. A suite of amino acids and sugars, all of which have been found in meteorites, was studied in crystalline form embedded in a polyethylene matrix. As the temperature was varied between 10 and 310 K, several strong vibrational modes were found to shift in spectral intensity and frequency. Such modes can be attributed to intramolecular, intermolecular, or phonon modes, or to some combination of the three.
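Since electro-optic sampling yields the full complex THz field, the optical constants follow directly from the ratio of sample and reference spectra. The following is a minimal Python sketch of that standard extraction, assuming a thick, weakly absorbing slab of known thickness and ignoring Fresnel losses and Fabry-Perot echoes; the function name, thickness value, and trace variables are illustrative, not taken from the thesis.

import numpy as np

c = 299792458.0          # speed of light, m/s
d = 1.0e-3               # assumed sample thickness, m

def optical_constants(t, e_ref, e_sam, thickness=d):
    """Return frequency, refractive index n, and extinction coefficient k
    from reference and sample time-domain field traces (thick-slab
    approximation; Fresnel reflection losses neglected for brevity)."""
    dt = t[1] - t[0]
    freq = np.fft.rfftfreq(len(t), dt)                 # Hz
    T = np.fft.rfft(e_sam) / np.fft.rfft(e_ref)        # complex transmission
    omega = 2.0 * np.pi * freq
    phase = np.unwrap(np.angle(T))                     # phase delay added by the sample
    with np.errstate(divide="ignore", invalid="ignore"):
        n = 1.0 - c * phase / (omega * thickness)
        k = -c * np.log(np.abs(T)) / (omega * thickness)
    return freq, n, k                                  # the DC bin is meaningless and can be discarded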

Relevance:

20.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a "control and optimization" point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is "flow optimization over a flow network" and the second is "nonlinear optimization over a generalized weighted graph". The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveal the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix, describing marginal and conditional dependencies between brain regions respectively, have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix, as sketched below. It is shown that the graphical lasso may recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to resting-state fMRI data of a number of healthy subjects.
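As a rough illustration of the estimation step mentioned above (not the modified algorithm proposed in the dissertation), the Python sketch below fits a sparse inverse covariance matrix to nodal measurements with scikit-learn's graphical lasso and reads off a graph from its nonzero off-diagonal entries; the synthetic data, regularization value, and threshold are assumptions.

import numpy as np
from sklearn.covariance import GraphicalLasso

# Illustrative only: X holds repeated observations of the nodal signals
# (e.g., node voltages), one column per node.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 6))
X[:, 1] += 0.8 * X[:, 0]            # make nodes 0 and 1 statistically coupled

model = GraphicalLasso(alpha=0.05)  # l1 penalty controls the sparsity of the estimate
model.fit(X)

precision = model.precision_        # estimated sparse inverse covariance matrix
# Declare an edge between nodes i and j when the off-diagonal entry is
# numerically nonzero.
adjacency = (np.abs(precision) > 1e-3) & ~np.eye(X.shape[1], dtype=bool)
print(np.argwhere(np.triu(adjacency)))   # recovered edge list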

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users on the Internet so that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
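For context, the sketch below simulates a generic single-source, single-bottleneck primal fluid model with a queue-based price, the kind of textbook model the paragraph refers to; it is not the refined model derived in the dissertation, and all constants are assumed for illustration.

import numpy as np

C = 10.0    # link capacity (packets per unit time)
k = 0.5     # controller gain
w = 4.0     # source willingness-to-pay
dt = 0.01   # Euler time step

x, q = 1.0, 0.0      # source rate and queue-based link price
for _ in range(20000):
    # The price integrates the excess of arrival rate over capacity (queueing).
    q = max(q + dt * (x - C) / C, 0.0)
    # Primal rate update: push the rate up toward the willingness-to-pay, down with price.
    x = max(x + dt * k * (w - x * q), 0.0)

print(f"equilibrium rate ~ {x:.2f} (capacity {C}), price ~ {q:.2f}")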

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
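As a toy illustration of such a convex relaxation (not the dissertation's exact formulation, which involves phase shifters and power over-delivery), the sketch below solves a two-bus branch-flow model in which the quadratic power-flow equality is relaxed to a second-order cone inequality, using the cvxpy modeling package; all numerical values are assumed.

import cvxpy as cp

# Two-bus toy OPF: a generator at bus 0 feeds a load at bus 1 over one line.
# Branch-flow variables, per unit. All numbers are illustrative.
r, x_ = 0.02, 0.06          # line resistance and reactance
p_load, q_load = 0.8, 0.3   # demand at bus 1
v0 = 1.0                    # squared voltage magnitude at the slack bus

P = cp.Variable()                # active power sent from bus 0
Q = cp.Variable()                # reactive power sent from bus 0
l = cp.Variable(nonneg=True)     # squared line current magnitude
v1 = cp.Variable()               # squared voltage magnitude at bus 1

constraints = [
    # Power reaching bus 1 (sent power minus line losses) must meet the load.
    P - r * l == p_load,
    Q - x_ * l == q_load,
    # Voltage drop along the line.
    v1 == v0 - 2 * (r * P + x_ * Q) + (r**2 + x_**2) * l,
    # Relaxed power-flow equation: P^2 + Q^2 <= l * v0 (the exact problem uses equality).
    cp.sum_squares(cp.hstack([P, Q])) <= l * v0,
    # Voltage magnitude limits.
    0.9**2 <= v1, v1 <= 1.1**2,
]

prob = cp.Problem(cp.Minimize(P), constraints)   # minimize generation at bus 0
prob.solve()
print("relaxed optimum:", prob.status, round(P.value, 4))

In this radial example, minimizing generation also minimizes line losses, so the relaxed cone constraint is active at the optimum and the relaxation recovers a feasible power flow.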

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance:

20.00%

Publisher:

Abstract:

The studies reported here were undertaken as part of a wider environmental feasibility study for the establishment of a modern sewage system in Freetown. The aim of this part of the study was to determine whether the hydrological regime of the Sierra Leone River Estuary would permit the large-scale introduction of sewage into the estuary without damaging the environment. The important factors were whether: 1) there would be sufficient dilution of the sewage; 2) floatable particles or other substances would create significant adverse effects in the estuarine ecosystem. The outfall sites are described together with the sampling stations, methods and analyses. Results include: 1) T/S profiles; 2) chemical analyses of the water. A review of the literature on the Sierra Leone River Estuary is included, which provides information on the plankton, benthos and fisheries. The results suggest that it would be inadvisable to locate untreated sewage outfalls at certain points where local circulations occur; such points are frequently observed in small embayments. These studies have been of short duration, but the data can serve as a baseline for more extended investigations that would give a more complete picture of the seasonal patterns in the estuary.

Relevance:

20.00%

Publisher:

Abstract:

Hypervelocity impact of meteoroids and orbital debris poses a serious and growing threat to spacecraft. To study hypervelocity impact phenomena, a comprehensive ensemble of real-time, concurrently operated diagnostics has been developed and implemented in the Small Particle Hypervelocity Impact Range (SPHIR) facility. This suite of simultaneously operated instrumentation provides multiple complementary measurements that facilitate the characterization of many impact phenomena in a single experiment. The investigation of hypervelocity impact phenomena described in this work focuses on normal impacts of 1.8 mm nylon 6/6 cylinder projectiles on variable-thickness aluminum targets. The SPHIR facility's two-stage light-gas gun is capable of routinely launching 5.5 mg nylon impactors to speeds of 5 to 7 km/s. Refinement of legacy SPHIR operating procedures and an investigation of first-stage pressure have improved the velocity performance of the facility, resulting in an increase in average impact velocity of at least 0.57 km/s. Results for the perforation area indicate that the considered range of target thicknesses spans multiple regimes describing the non-monotonic scaling of target perforation with decreasing target thickness. The laser side-lighting (LSL) system has been developed to provide ultra-high-speed shadowgraph images of the impact event. This novel optical technique is demonstrated to characterize the propagation velocity and two-dimensional optical density of impact-generated debris clouds. Additionally, a debris capture system is located behind the target during every experiment to provide complementary information regarding the trajectory distribution and penetration depth of individual debris particles. The use of a coherent, collimated illumination source in the LSL system facilitates the simultaneous measurement of impact phenomena with near-IR and UV-vis spectrograph systems. Comparison of LSL images with concurrent IR results indicates two distinctly different phenomena. A high-speed, pressure-dependent IR-emitting cloud is observed in experiments to expand at velocities much higher than the debris and ejecta phenomena observed using the LSL system. In double-plate target configurations, this phenomenon is observed to interact with the rear wall several microseconds before the subsequent arrival of the debris cloud. Additionally, the dimensional analysis presented by Whitham for blast waves is shown to describe the pressure-dependent radial expansion of the observed IR-emitting phenomenon. Although this work focuses on a single hypervelocity impact configuration, the diagnostic capabilities and techniques described can be used with a wide variety of impactors, materials, and geometries to investigate any number of engineering and scientific problems.

Relevance:

20.00%

Publisher:

Abstract:

27 p.

Relevance:

20.00%

Publisher:

Abstract:

The objective of this work was to evaluate whether the expression of collagen types I and III and of metalloproteinase may be related to Gleason grade, pathological stage and preoperative PSA, and whether this could serve as a prognostic marker of disease. The study group included radical prostatectomy specimens from 33 patients with adenocarcinoma operated on between 2001 and 2009. The patients were divided into 3 groups: Gleason score = 6 (13 patients), Gleason score = 7 (10 patients), and Gleason score ≥ 8 (10 patients). Benign prostatic tissue adjacent to the cancer area in each Gleason group was used as the control group. The areas of adenocarcinoma and benign tissue were selected under microscopic analysis and processed for collagen I and III gene expression analysis by real-time PCR. Ten deparaffinized sections from each group were used to evaluate collagen I, collagen III and metalloproteinase immunoexpression. The results were related to Gleason grade, preoperative PSA and pathological stage. Despite the significant difference in gene expression of both collagen I and III between benign prostatic tissue and tumor areas in the Gleason = 6 samples (collagen I = 0.4 ± 0.2 vs 5 ± 2.4, p < 0.05; collagen III = 0.2 ± 0.06 vs 0.7 ± 0.1, p < 0.05) and the Gleason ≥ 8 samples (collagen I = 8 ± 3.4 vs 1.4 ± 0.8, p < 0.05; collagen III = 1.8 ± 0.5 vs 0.6 ± 0.1, p < 0.05), there was no correlation with Gleason grade, preoperative PSA or pathological stage. There was a positive correlation between metalloproteinase expression and Gleason grade (r² = 0.47). In conclusion, the positive correlation between metalloproteinase expression and Gleason grade suggests that metalloproteinase may be a promising factor for improving Gleason grading. Its expression and regulation do not appear to be related to collagen degradation. There is no correlation between collagen expression and Gleason grade, at either the gene or the protein level.

Relevance:

20.00%

Publisher:

Abstract:

The laminar-to-turbulent transition process in boundary-layer flows in thermochemical nonequilibrium at high enthalpy is measured and characterized. Experiments are performed in the T5 Hypervelocity Reflected Shock Tunnel at Caltech, using a 1 m long, 5-degree half-angle axisymmetric cone instrumented with 80 fast-response annular thermocouples, complemented by boundary-layer stability computations using the STABL software suite. A new mixing tank is added to the shock tube fill apparatus for premixed freestream gas experiments, and a new cleaning procedure results in more consistent transition measurements. Transition location is nondimensionalized using a scaling with the boundary layer thickness, which is correlated with the acoustic properties of the boundary layer, and compared with parabolized stability equation (PSE) analysis. In these nondimensionalized terms, transition delay with increasing CO2 concentration is observed: tests in 100% and 50% CO2 by mass transition up to 25% and 15% later, respectively, than air experiments. These results are consistent with previous work indicating that CO2 molecules at elevated temperatures absorb acoustic instabilities in the MHz range, the expected frequency of the Mack second-mode instability at these conditions, and are also consistent with predictions from PSE analysis. A strong unit Reynolds number effect is observed, which is believed to arise from tunnel noise. Transition N factors (NTr) for air from 5.4 to 13.2 are computed, substantially higher than previously reported for noisy facilities. Time- and spatially resolved heat transfer traces are used to track the propagation of turbulent spots, and convection rates at 90%, 76%, and 63% of the boundary layer edge velocity are observed for the leading edge, centroid, and trailing edge of the spots, respectively. A model constructed with these spot propagation parameters is used to infer spot generation rates from the measured transition onset-to-completion distance. Finally, a novel method to control transition location with boundary layer gas injection is investigated. An appropriate porous-metal injector section for the cone is designed and fabricated, and the efficacy of injected CO2 for delaying transition is gauged at various mass flow rates and compared with both no-injection and chemically inert argon injection cases. While CO2 injection seems to delay transition and argon injection seems to promote it, the experimental results are inconclusive, and matching computations do not predict a reduction in N factor for any of the CO2 injection conditions computed.
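For readers unfamiliar with the N-factor terminology used above, the sketch below illustrates the generic e^N idea: integrate a spatial growth rate along the cone and flag transition where the amplification factor first reaches a threshold. The growth-rate profile and threshold here are invented for illustration and are not taken from the STABL/PSE computations reported in the thesis.

import numpy as np

# Toy e^N transition estimate. The growth-rate profile below is made up;
# a real analysis would take -alpha_i(x) from PSE/stability computations.
x = np.linspace(0.0, 1.0, 500)                     # distance along the cone, m
sigma = 60.0 * np.exp(-((x - 0.5) / 0.25) ** 2)    # assumed spatial growth rate, 1/m
N = np.concatenate(([0.0],
                    np.cumsum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(x))))

N_tr = 9.0                                         # assumed transition N factor
crossing = np.argmax(N >= N_tr) if N[-1] >= N_tr else None
if crossing is not None:
    print(f"predicted transition onset at x = {x[crossing]:.3f} m")
else:
    print("N factor never reaches the threshold on the cone")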

Relevance:

20.00%

Publisher:

Abstract:

The main objective of this thesis concerns the treatment of oily water by electroflocculation using variable-frequency alternating current, exploring the potential of this technique. With the growing demand for petroleum and its derivatives, the production of such wastewater is increasing, and it must be treated to meet legal requirements before being discharged. The work presented describes the literature review and the results of the experiments performed, employing electroflocculation treatment to remove the pollutant substances present in these effluents. The electroflocculation process was tested with both direct current and variable-frequency alternating current, on synthetic and real high-salinity effluents containing high levels of oil and grease, turbidity and color. Removal efficiencies of 99% for oil and grease, color and turbidity were obtained using aluminum electrodes. The electroflocculation process proved quite advantageous because the high conductivity allows treatment with lower energy consumption. Compared with the direct-current technology, electroflocculation with alternating current proved very efficient with respect to savings in electrode mass wear; depending on the duration of current application under the same study conditions, consumption was reduced by more than half.