974 results for "Conventional methods"


Relevance: 60.00%

Abstract:

The usefulness of cosmogenic beryllium-10 (half-life = 2.5 Ma) for studying the accumulation rates of ferromanganese nodules is reported, based on its measured depth distribution in the top 20 mm of these deposits. Accumulation rates in the range of 1 to 4 mm/Ma have been obtained, in good agreement with rates determined using the 230Th method on the same nodules. The use of 10Be offers promise for extending the dating to the outer few cm of the nodules. This contrasts with conventional methods using the 230Th and 231Pa isotopes, which, because of their comparatively short half-lives, are limited to a few mm at the surface of the nodules. Detailed studies of 10Be in manganese deposits, coupled with other trace-element analyses, should prove valuable in understanding the processes of formation of these deposits and the chronology of events recorded by them.
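As a minimal sketch of the dating principle described above: the accumulation rate s can be recovered from the slope of ln(activity) versus depth, since A(z) = A0 exp(-λz/s). Only the half-life comes from the abstract; the depth-activity values below are invented for illustration.

```python
import numpy as np

T_HALF_MA = 2.5                      # 10Be half-life used in the abstract (Ma)
LAMBDA = np.log(2) / T_HALF_MA       # decay constant (1/Ma)

# Hypothetical 10Be activities measured at depths z (mm) in a nodule crust
z_mm = np.array([1.0, 3.0, 5.0, 8.0, 12.0, 16.0, 20.0])
activity = np.array([10.0, 7.4, 5.5, 3.5, 1.9, 1.05, 0.57])  # arbitrary units

# A(z) = A0 * exp(-lambda * z / s)  =>  ln A is linear in z with slope -lambda/s
slope, intercept = np.polyfit(z_mm, np.log(activity), 1)
s_mm_per_Ma = -LAMBDA / slope
print(f"accumulation rate ~ {s_mm_per_Ma:.2f} mm/Ma")
```

With these made-up numbers the fit returns roughly 1.8 mm/Ma, inside the 1-4 mm/Ma range reported in the abstract.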

Relevance: 60.00%

Abstract:

The first objective of this research was to develop closed-form and numerical probabilistic methods of analysis that can be applied to otherwise conventional analyses of unreinforced and geosynthetic reinforced slopes and walls. These probabilistic methods explicitly include the influence of random variability of soil and reinforcement properties, spatial variability of the soil, and cross-correlation between soil input parameters on the probability of failure. The quantitative impact of simultaneously considering random and/or spatial variability in soil properties in combination with cross-correlation in soil properties is investigated for the first time in the research literature. Depending on the magnitude of these statistical descriptors, margins of safety based on conventional notions of safety may be very different from margins of safety expressed in terms of probability of failure (or reliability index). The thesis also shows that intuitive notions of margin of safety based on the conventional factor of safety and on the probability of failure can be brought into alignment when cross-correlation between soil properties is considered in a rigorous manner. The second objective of this thesis was to develop a general closed-form solution for the true probability of failure (or reliability index) of a simple linear limit state function with one load term and one resistance term, expressed first in general probabilistic terms and then migrated to an LRFD format for the purpose of LRFD calibration. The formulation considers contributions to the probability of failure due to model type, uncertainty in bias values, bias dependencies, uncertainty in estimates of nominal values for correlated and uncorrelated load and resistance terms, and the average margin of safety expressed as the operational factor of safety (OFS). Bias is defined as the ratio of measured to predicted value. Parametric analyses were carried out to show that ignoring possible correlations between random variables can lead to conservative (safe) values of resistance factor in some cases and to non-conservative (unsafe) values in others. Example LRFD calibrations were carried out using different load and resistance models for the pullout internal stability limit state of steel strip and geosynthetic reinforced soil walls, together with matching bias data reported in the literature.
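The core reliability calculation can be illustrated with a small Monte Carlo sketch for the linear limit state g = R - Q with correlated lognormal load and resistance. All distribution parameters and the correlation value below are assumptions chosen for illustration, not values from the thesis.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical lognormal resistance R and load Q (means and COVs are illustrative)
mu_R, cov_R = 1.5, 0.3     # operational factor of safety OFS = mu_R / mu_Q = 1.5
mu_Q, cov_Q = 1.0, 0.2
rho = 0.5                  # correlation between ln R and ln Q (assumption)

# Lognormal parameters from mean and COV
sR = np.sqrt(np.log(1 + cov_R**2)); mR = np.log(mu_R) - sR**2 / 2
sQ = np.sqrt(np.log(1 + cov_Q**2)); mQ = np.log(mu_Q) - sQ**2 / 2

# Correlated standard normals mapped to lognormal R and Q
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
R = np.exp(mR + sR * z[:, 0])
Q = np.exp(mQ + sQ * z[:, 1])

pf = np.mean(R - Q < 0)        # probability of failure
beta = -norm.ppf(pf)           # reliability index
print(f"pf = {pf:.4f}, beta = {beta:.2f}")
```

Re-running with rho = 0 shows the effect the abstract describes: the same nominal OFS maps to a different probability of failure once cross-correlation is included.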

Relevance: 60.00%

Abstract:

In geotechnical engineering, the stability of rock excavations and walls is estimated using tools that include a map of the orientations of exposed rock faces. However, measuring these orientations with conventional methods can be time-consuming, sometimes dangerous, and is limited to regions of the exposed rock that are reachable by a human. This thesis introduces a 2D, simulated, quadcopter-based rock wall mapping algorithm for GPS-denied environments such as underground mines or areas near high walls on the surface. The proposed algorithm employs techniques from the field of robotics known as simultaneous localization and mapping (SLAM) and is a step towards 3D rock wall mapping. Quadcopters are not only agile but can also hover, which is very useful in confined spaces such as underground or near rock walls. The quadcopter requires sensors to enable self-localization and mapping in dark, confined, GPS-denied environments, but these sensors are limited by the quadcopter's payload and power restrictions. Because of these restrictions, a lightweight 2D laser scanner is proposed. As a first step towards a 3D mapping algorithm, this thesis considers a simplified scenario in which a simulated 1D laser range finder and a 2D IMU are mounted on a quadcopter moving in a plane. Because the 1D laser does not provide enough information to map the 2D world from a single measurement, many measurements are combined over the trajectory of the quadcopter. Least squares optimization (LSO) is used to optimize the estimated trajectory and rock face for all data collected over the length of a flight. Simulation results show that the mapping algorithm developed is a good first step: by combining measurements over a trajectory, the scanned rock face can be estimated using a lower-dimensional range sensor. A swathing manoeuvre is introduced as a way to promote loop closures within a short time period, thus reducing accumulated error. Some suggestions on how to improve the algorithm are also provided.
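A toy version of the least-squares idea, reduced to one dimension, can be sketched as follows: noisy odometry steps and 1D laser ranges to a flat wall are fused in a single least-squares problem over the poses and the wall position. The geometry and noise levels are invented, and scipy.optimize.least_squares stands in for the thesis's LSO machinery.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Ground truth: quadcopter moves in 1D toward a flat wall at x = 10 m
true_x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
wall_true = 10.0

ODOM_SD, RANGE_SD = 0.05, 0.10
odom = np.diff(true_x) + rng.normal(0, ODOM_SD, 4)        # noisy odometry steps
ranges = wall_true - true_x + rng.normal(0, RANGE_SD, 5)  # noisy 1D laser ranges

def residuals(theta):
    x, w = theta[:5], theta[5]
    r_prior = np.array([x[0] / 0.01])           # anchor the first pose at 0
    r_odom = (np.diff(x) - odom) / ODOM_SD      # motion-model residuals
    r_range = ((w - x) - ranges) / RANGE_SD     # range-measurement residuals
    return np.concatenate([r_prior, r_odom, r_range])

x0 = np.concatenate([[0.0], np.cumsum(odom)])   # dead-reckoned initial guess
sol = least_squares(residuals, np.concatenate([x0, [x0[0] + ranges[0]]]))
print("poses:", np.round(sol.x[:5], 2), "| wall:", round(sol.x[5], 2))
```

Without the range residuals the pose error grows with each odometry step; jointly estimating the wall ties the poses together, which is the benefit the abstract attributes to combining measurements over the trajectory.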

Relevance: 60.00%

Abstract:

In this paper, we consider the secure beamforming design for an underlay cognitive radio multiple-input single-output broadcast channel in the presence of multiple passive eavesdroppers. Our goal is to design a jamming noise (JN) transmit strategy that maximizes the secrecy rate of the secondary system. Using the zero-forcing method to eliminate the interference caused by the JN to the secondary user, we study the joint optimization of the information and JN beamforming for secrecy rate maximization of the secondary system, subject to all the interference power constraints at the primary users as well as the per-antenna power constraint at the secondary transmitter. For an optimal beamforming design, the original problem is a nonconvex program, which can be reformulated as a convex program by applying the rank relaxation method. To this end, we prove that the rank relaxation is tight and propose a barrier interior-point method to solve the resulting saddle-point problem based on a duality result. To find the globally optimal solution, we transform the considered problem into an unconstrained optimization problem. We then employ the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method to solve the resulting unconstrained problem, which reduces the complexity significantly compared to conventional methods. Simulation results show the fast convergence of the proposed algorithm and substantial performance improvements over existing approaches.
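The final step, applying BFGS to an unconstrained reformulation, can be illustrated on a toy surrogate: a concave rate-like objective with the power constraint folded into the objective as a quadratic penalty. The channel vector, penalty weight, and real-valued simplification are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
h = rng.normal(size=4)      # real-valued stand-in for a channel vector (assumption)
P = 1.0                     # power budget (assumption)
MU = 50.0                   # penalty weight (assumption)

def neg_objective(w):
    rate = np.log1p(np.dot(h, w) ** 2)                # rate-like concave term
    penalty = MU * max(0.0, np.dot(w, w) - P) ** 2    # soft power constraint
    return -rate + penalty

res = minimize(neg_objective, np.ones(4) / 2, method="BFGS")
w = res.x
print(f"rate = {np.log1p(np.dot(h, w)**2):.3f}, power = {np.dot(w, w):.3f}")
```

BFGS builds a quasi-Newton approximation of the Hessian from gradient differences, which is what gives the complexity advantage over repeatedly solving constrained convex programs.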

Relevance: 60.00%

Abstract:

This article analyzes the overall efficiency of a new mold for the rotational molding of plastic parts that is directly heated by thermal fluid. The overall efficiency rests on several items: reduced cycle time, more uniform heating and cooling, and low energy consumption. The new tool takes advantage of additive fabrication and electroforming to produce the optimal manifold and cavity shell of the mold. An experimental test of a prototype mold was carried out on a rotational molding machine developed for this purpose, measuring wall temperature and internal air temperature, with and without plastic material inside. The results were compared with those of a conventional mold heated in an oven and with theoretical simulations performed using computational fluid dynamics (CFD) software. The analysis shows a considerable improvement in cycle time relative to conventional methods (oven heating) and better thermal uniformity than conventional procedures, achieved by the direct heating of oil through external channels. In addition to the thermal analysis, an energy-efficiency study was performed. POLYM. ENG. SCI., 52:1998-2005, 2012. © 2012 Society of Plastics Engineers.
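A first-order lumped-capacitance estimate illustrates why direct fluid heating shortens the cycle: the heating time scales as mc/(hA), and the heat-transfer coefficient h for circulating thermal oil is roughly an order of magnitude higher than for oven air. All numerical values below are illustrative assumptions, not data from the article.

```python
import numpy as np

# Lumped-capacitance heating time:
#   m c dT/dt = h A (Ts - T)  =>  t = (m c / h A) ln((Ts - T0) / (Ts - Tt))
m, c, A = 20.0, 900.0, 0.5            # mold mass (kg), specific heat (J/kg K), area (m^2)
T0, T_target, T_source = 25.0, 200.0, 250.0   # temperatures (C)

def heat_time(h):
    tau = m * c / (h * A)             # thermal time constant (s)
    return tau * np.log((T_source - T0) / (T_source - T_target))

for label, h in [("oven (air convection)", 40.0), ("direct thermal oil", 400.0)]:
    print(f"{label}: ~{heat_time(h) / 60:.1f} min")
```

The tenfold change in h translates directly into a tenfold change in heating time under this simple model, consistent in direction with the cycle-time improvement the article reports.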

Relevance: 60.00%

Abstract:

Efficient design of material supply processes is a decisive prerequisite for ensuring high availability of materials in assembly. The selection of adequate supply strategies must always be guided by the requirements of the material supply process. The performance requirements for effective material supply are determined largely by the assembly process, and a precisely matching material supply strategy must be set against these requirements. The requirements can be formulated in qualitative or quantitative form. Considering quantitative data alone is insufficient, because at the time of planning reliable quantitative data are often not available, nor does the effort to obtain them appear justified. Moreover, the conventional methods frequently used for selecting material supply strategies have the drawback that failure to meet one performance requirement can be compensated by particularly good fulfilment of another (time vs. quality). The method of Fuzzy Axiomatic Design is particularly well suited to selecting a material supply strategy under both qualitative and quantitative requirements. This method allows the requirements of the material supply process to be matched against the suitability of different material supply strategies.
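A minimal sketch of the Fuzzy Axiomatic Design matching step, assuming triangular fuzzy numbers: the information content I = log2(system area / common area) is computed from the overlap between the design range (what the assembly process demands) and the system range (what a candidate strategy delivers), and the strategy with the lowest total I over all requirements would be preferred. The ranges below are hypothetical.

```python
import numpy as np

# Triangular membership function for a fuzzy number (a, b, c)
def tri(x, a, b, c):
    left = (x - a) / (b - a)
    right = (c - x) / (c - b)
    return np.clip(np.minimum(left, right), 0.0, None)

# Information content I = log2(system area / common area), numerically
def information_content(design, system, n=10_000):
    lo = min(design[0], system[0]); hi = max(design[2], system[2])
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    sys_area = np.sum(tri(x, *system)) * dx
    common_area = np.sum(np.minimum(tri(x, *design), tri(x, *system))) * dx
    return np.inf if common_area == 0 else np.log2(sys_area / common_area)

# Hypothetical requirement "replenishment time" (hours)
design_range = (0.0, 1.0, 2.0)    # what the assembly process demands
system_range = (1.0, 2.5, 4.0)    # what a candidate strategy delivers
print(f"I = {information_content(design_range, system_range):.2f} bits")
```

Because each requirement contributes its own information content and the contents are summed, a shortfall on one requirement cannot be offset by over-fulfilment of another, which is exactly the compensation problem of the conventional methods described above.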

Relevance: 60.00%

Abstract:

Market research is often conducted through conventional methods such as surveys, focus groups, and interviews, but the drawbacks of these methods are that they can be costly and time-consuming. This study develops a new method, based on a combination of standard techniques such as sentiment analysis and normalisation, to conduct market research in a manner that is free and quick. The method can be used in many application areas, but this study focuses mainly on the veganism market, identifying vegan food preferences in the form of a profile. Several food words are identified, along with their distribution between positive and negative sentiments in the profile. Surprisingly, non-vegan foods such as cheese, cake, milk, pizza, and chicken dominate the profile, indicating that there is a significant market for vegan-suitable alternatives to such foods. Meanwhile, vegan-suitable foods such as coconut, potato, blueberries, kale, and tofu also make strong appearances in the profile. Validation is performed by applying the method to Volkswagen vehicle data to identify positive and negative sentiment across five car models. Some results were found to be consistent with sales figures and expert reviews, while others were inconsistent. The reliability of the method is therefore questionable, and the results should be used with caution.
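A minimal sketch of the lexicon-plus-normalisation idea, with a toy sentiment lexicon and made-up posts (the study's actual data sources and word lists are not reproduced here):

```python
from collections import defaultdict

# Toy sentiment lexicon and food vocabulary (all assumptions for illustration)
POS = {"love", "great", "delicious", "amazing", "best"}
NEG = {"hate", "awful", "bland", "worst", "disappointing"}
FOODS = {"cheese", "tofu", "kale", "pizza", "coconut"}

posts = [
    "I love vegan pizza with coconut cheese",
    "this tofu was bland and disappointing",
    "kale smoothies are amazing",
]

profile = defaultdict(lambda: [0, 0])        # food -> [positive, negative]
for post in posts:
    words = set(post.lower().split())
    pos, neg = len(words & POS), len(words & NEG)
    for food in words & FOODS:
        profile[food][0] += pos
        profile[food][1] += neg

# Normalise each food's counts to a positive-sentiment share
for food, (p, n) in sorted(profile.items()):
    total = p + n
    share = p / total if total else 0.5
    print(f"{food:8s} positive share = {share:.2f}")
```

Scaled up to a large corpus, the resulting per-food positive shares form the kind of preference profile the study builds for the veganism market.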

Relevance: 60.00%

Abstract:

Classical survival analysis methods, notably the nonparametric method of Kaplan and Meier (1958), assume independence between the variable of interest and the censoring variable. Since this independence assumption is not always tenable, several authors have developed methods to take the dependence into account, most of which impose assumptions on that dependence. In this thesis, we propose a method for estimating the dependence in the presence of dependent censoring that uses the copula-graphic estimator for Archimedean copulas (Rivest and Wells, 2001) and assumes that the distribution of the censoring variable is known. We then study the consistency of this estimator through simulations before applying it to a real data set.
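The problem the thesis addresses can be illustrated by simulation: when failure and censoring times are made dependent (here via a Gaussian copula, purely as an illustration; the thesis works with Archimedean copulas), the standard Kaplan-Meier estimator no longer recovers the true survival function.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000

# Failure time T and censoring time C made dependent via a Gaussian copula
rho = 0.7
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
T = -np.log(1 - norm.cdf(z[:, 0]))             # Exp(1) failure times
C = -np.log(1 - norm.cdf(z[:, 1])) / 0.7       # Exp(0.7) censoring times

obs = np.minimum(T, C)
delta = (T <= C).astype(int)                   # 1 = event observed, 0 = censored

def kaplan_meier(obs, delta, t):
    """Kaplan-Meier estimate of S(t), assuming no ties (continuous data)."""
    order = np.argsort(obs)
    o, d = obs[order], delta[order]
    at_risk = len(o) - np.arange(len(o))
    factors = np.where((d == 1) & (o <= t), 1 - 1 / at_risk, 1.0)
    return factors.prod()

print("true S(1) =", round(np.exp(-1), 3),
      "| KM estimate =", round(kaplan_meier(obs, delta, 1.0), 3))
```

Setting rho = 0 makes the Kaplan-Meier estimate match the true value again, which is precisely the independence assumption the copula-graphic estimator is designed to relax.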

Relevance: 60.00%

Abstract:

We currently live in an era in which advertising surrounds us in many forms and companies strive ever harder to make their message effective. The use of conventional methods, such as television, radio, or billboards, is becoming ineffective. In a very short time, over the last twenty years, the Internet has changed the way we live, and has even been compared to the Renaissance and the Industrial Revolution. The most recent generations were born surrounded by this advertising "boom", which has made them immune to it. To get around this problem, Levinson proposed in 1989 a way to minimise this effect and, at the same time, to enable small companies to compete with larger ones (Levinson, 2007). Guerrilla marketing is thus characterised by low-cost implementations, sometimes unrepeatable, that achieve a significant "wow" impact on the general public (Oliveira & Ferreira, 2013). The present study contributes to the existing guerrilla marketing literature by compiling the development of this topic up to the present day. To understand which factors influence the use of guerrilla marketing by Portuguese companies, 140 companies across the country were surveyed through a questionnaire based on the study developed by Overbeek (2012). Through this exploratory research, in an area still little explored in Portugal to date, especially at the academic level, "it was found that there is great demand for this type of unconventional tool: 86.4% of the sample had already witnessed a guerrilla action, yet only 36.4% admit to having implemented one in their company, which raises the question of why the rate of use of this type of unconventional approach is so low" (Almeida & Au-Yong-Oliveira, 2015, p. 1). The explanation may lie in the strong uncertainty avoidance that exists in Portugal (Hofstede, 2001) and in the fear of change and of trying new products (Steenkamp et al., 1999), factors that will not change for decades, given how long it takes to change national cultures (Hofstede, 2001). In the sample of 140 companies, respondents with degrees (at bachelor's and master's level) in Marketing (18.7% of the sample), Design (15.7%), Management (10.4%), and Information and Communication Technologies (7.9%) stand out. It can be concluded that these are the four fundamental areas, or at least that there is currently a need for knowledge in these four areas. Given the [small] size of the companies, an employee who has these four competencies has a competitive advantage over the others in terms of hard skills.

Relevance: 60.00%

Abstract:

Growing global urbanisation has led to increasing levels of atmospheric pollutants and the consequent deterioration of air quality. Controlling air pollution and monitoring air quality are fundamental steps for implementing reduction strategies and stimulating citizens' environmental awareness. To this end, several techniques and technologies can be used to monitor air quality. The use of microsensors has emerged as an innovative tool for air quality monitoring. Although microsensor performance enables a new monitoring strategy, with fast responses, low operating costs, and high efficiencies that cannot be achieved with conventional approaches alone, deeper knowledge is still needed before these new technologies can be integrated, particularly regarding the verification of sensor performance against reference methods in experimental campaigns. This dissertation, developed as an internship at the Instituto do Ambiente e Desenvolvimento, aimed to evaluate the performance of low-cost sensors against reference methods, based on an air quality monitoring campaign carried out in the centre of Aveiro over two weeks in October 2014. More specifically, it seeks to understand to what extent low-cost sensors can be used in compliance with the requirements specified in legislation and the specifics of the standards, thereby establishing a microsensor evaluation protocol. The work also included the characterisation of air quality in the centre of Aveiro during the monitoring campaign. The deployment of electrochemical, MOS, and OPC microsensors in parallel with reference equipment in this field study made it possible to assess the reliability and uncertainty of these new monitoring technologies. The study showed that electrochemical microsensors are more accurate than metal-oxide-based microsensors, showing strong correlations with the reference methods for several pollutants. The results obtained with the optical particle counters were satisfactory, but could be improved both in the sampling mode and in the data-processing method applied. Ideally, microsensors should show strong correlations with the reference method and high data-collection efficiency. However, some problems were identified in the sensors' data-collection efficiency that may be related to high relative humidity and temperatures during the campaign, intermittent communication failures, and the instability and reactivity caused by interfering gases. Once the limitations of sensor technologies are overcome and adequate quality assurance and quality control procedures can be followed, low-cost sensors have great potential to enable air quality monitoring with high spatial coverage, which is especially beneficial in urban areas.
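The sensor-versus-reference comparison at the heart of such an evaluation protocol reduces to a few summary statistics per pollutant: correlation with the reference, RMSE, and data-capture efficiency. A sketch with synthetic two-week hourly data (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 14 * 24                                            # two-week campaign

# Synthetic reference analyser signal with a daily cycle plus noise
reference = (30 + 10 * np.sin(np.linspace(0, 6 * np.pi, hours))
             + rng.normal(0, 2, hours))
sensor = 0.9 * reference + 4 + rng.normal(0, 3, hours)     # biased, noisier sensor
sensor[rng.random(hours) < 0.05] = np.nan                  # communication dropouts

valid = ~np.isnan(sensor)
r = np.corrcoef(reference[valid], sensor[valid])[0, 1]
rmse = np.sqrt(np.mean((sensor[valid] - reference[valid]) ** 2))
print(f"Pearson r = {r:.3f}, RMSE = {rmse:.1f} ug/m3, capture = {valid.mean():.0%}")
```

The dropout term mimics the intermittent communication failures mentioned above; a capture rate well below 100% is itself a performance finding, independent of the correlation.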

Relevance: 60.00%

Abstract:

Although a clear correlation between levels of fungi in the air and health impacts has not been shown in epidemiological studies, fungi must be regarded as potential occupational health hazards. Fungi can have an impact on human health in four different ways: (1) they can infect humans, (2) they may act as allergens, (3) they can be toxigenic, or (4) they may cause inflammatory reactions. The fungi of concern in occupational hygiene are mostly non-pathogenic or facultatively pathogenic (opportunistic) species, but they are relevant as allergens and mycotoxin producers. It is known that the exclusive use of conventional methods for fungal quantification (fungal culture) may underestimate the results for several reasons. The incubation temperature chosen will not be the most suitable for every fungal species, resulting in the inhibition of some species and the favouring of others. Differences in fungal growth rates may also lead to underestimation, since species with higher growth rates may inhibit the growth of other species. Finally, underestimation can result from the collection of non-viable fungal particles or from fungal species that do not grow on the culture media used, although these species may be clinically relevant in the context. Because of these constraints, occupational exposure assessment in settings with high fungal contamination levels should follow these steps: (1) apply conventional methods to obtain fungal load information (air and surfaces) for the most critical scenario previously selected; (2) compare the results with guidelines, applying legal requirements or limits suggested by scientific and/or technical organizations, and also with results from other studies of the same setting, if any exist; (3) select the most suitable indicators for each setting and apply conventional culture methods alongside molecular tools. This methodology will ensure a more realistic characterization of the fungal burden in each setting and, consequently, make it possible to identify further measures for the assessment of fungal metabolites, as well as a more adequate workers' health surveillance. The methodology applied to characterize fungal burden in several occupational environments, focused on the prevalence of Aspergillus spp., will be presented and discussed.

Relevance: 60.00%

Abstract:

The Brazilian poultry industry is economically prominent, with a total of 12.3 million tonnes produced in the country in 2013. This large-scale production generates a considerable volume of by-products, up to 35% of the live bird. Such residues are converted by traditional processes into products of low commercial value, such as meals. The pH-shift process is an important alternative for obtaining proteins with better functional and nutritional characteristics. Studying the process variables while scaling up is fundamental for applying technologies developed in the laboratory and for the subsequent definition of industrial processes. The production of protein isolates would be an attractive technology for the utilisation of by-products of the chicken industry, converting them into an excellent protein source and adding value to the product obtained. The objective of this work was to produce protein isolates at different scales using inedible by-products of the chicken industry. The solubilisation of the raw-material proteins was studied to define the pH values for solubilisation and isoelectric precipitation. The curve indicated an alkaline pH of 11.0 for the solubilisation step and a pH of 5.25 for the protein precipitation step. The proteins obtained were characterised in terms of proximate composition, acid value (AV), peroxide value (PV), and thiobarbituric acid reactive substances (TBARS), as well as the functional properties of solubility, water-holding capacity (WHC), and oil-holding capacity (OHC), and the nutritional property of protein digestibility. Commercial viscera meals were analysed comparatively on the same parameters. A scale-up of the process was carried out and evaluated by the same responses as the laboratory-scale product. Protein contents of 82% and 85% were obtained at laboratory scale and at scale-up, respectively, together with a 75% reduction in lipids and an 85% reduction in ash relative to the raw material. The proximate composition of the meals analysed ranged over 67-72% crude protein, 17-22% lipids, and 9-15% ash. The AV was 2.2 and 3.1 meq/g of isolate and 1.6 to 2.0 meq/g of meal, while the PV ranged from 0.003 to 0.005 meq/g of isolate and from 0.002 to 0.049 meq/g of meal. TBARS values were 0.081 and 0.214 mg MA/g of isolate and 0.041 to 0.128 mg MA/g of meal. The solubility of the isolate proteins was 84% and 81% at pH 3 and 11, respectively, and 5% at pH 5, whereas the meals ranged from 22 to 31% between pH 3 and 11. The WHC of the isolate was 3.1 to 16.5 g water/g protein, against 3.8 to 10.9 g water/g protein for the meals. The OHC was 4.2 mL oil/g protein for the isolates and 2.6 mL oil/g protein for the meals. The protein isolates showed 92% and 95% protein digestibility, compared with 84% for the commercial meals. Taken together, the results of this work show that it was possible to scale up the pH-shift process without losing quality in the physicochemical indices or in protein digestibility.

Relevance: 60.00%

Abstract:

Current space exploration has transpired through the use of chemical rockets, and they have served us well, but they have their limitations. Exploration of the outer solar system, Jupiter and beyond, will most likely require a new generation of propulsion systems. One potential technology class for spacecraft propulsion and power systems involves thermonuclear fusion plasma systems. Within this class it is well accepted that d-3He fusion is the most promising of the fuel candidates for spacecraft applications, as the 14.7 MeV protons carry up to 80% of the total fusion power while the alpha particles have energies less than 4 MeV. The other minor fusion products from secondary d-d reactions, consisting of 3He, n, p, and 3H, also have energies less than 4 MeV. There are two main fusion subsets, namely magnetic confinement fusion devices and inertial electrostatic confinement (IEC) fusion devices. Magnetic confinement fusion devices are characterized by complex geometries and prohibitive structural mass, compromising spacecraft use at this stage of exploration. While generating energy from a lightweight and reliable fusion source is important, another critical issue is harnessing this energy into usable power and/or propulsion. IEC fusion is a method of fusion plasma confinement that uses a series of biased electrodes to accelerate a uniform spherical beam of ions into a hollow cathode, typically comprised of a gridded structure with high transparency. The inertia of the imploding ion beam compresses the ions at the center of the cathode, increasing the density to the point where fusion occurs. Since the velocity distributions of fusion particles in an IEC are essentially isotropic and carry no net momentum, a means of redirecting the velocity of the particles is necessary to efficiently extract energy and provide power or create thrust. There are classes of advanced-fuel fusion reactions for which direct energy conversion based on electrostatically biased collector plates is impossible due to potential limits, material structure limitations, and IEC geometry. Thermal conversion systems are also inefficient for this application. A method of converting the isotropic IEC output into a collimated flow of fusion products solves these issues and allows direct energy conversion. An efficient traveling-wave direct energy converter has been proposed and studied by Momota and Shu, and further evaluated with numerical simulations by Ishikawa and others. One of the conventional methods of collimating charged particles is to surround the particle source with an applied magnetic channel. Charged particles are trapped and move along the lines of flux. By introducing gradually expanding lines of force along the magnetic channel, the velocity component perpendicular to the lines of force is transferred to the parallel one. However, efficient operation of the IEC requires a null magnetic field at the core of the device. To achieve this, Momota and Miley have proposed a pair of magnetic coils anti-parallel to the magnetic channel, creating the null hexapole magnetic field region necessary for the IEC fusion core. Numerically, collimation of 300 eV electrons without a stabilization coil was demonstrated to approach 95% at a profile corresponding to Vsolenoid = 20.0 V, Ifloating = 2.78 A, Isolenoid = 4.05 A, while collimation of electrons with a stabilization coil present was demonstrated to reach 69% at a profile corresponding to Vsolenoid = 7.0 V, Istab = 1.1 A, Ifloating = 1.1 A, Isolenoid = 1.45 A.
Experimentally, collimation of electrons with a stabilization coil present was demonstrated to be 35% at 100 eV, reaching a peak of 39.6% at 50 eV with a profile corresponding to Vsolenoid = 7.0 V, Istab = 1.1 A, Ifloating = 1.1 A, Isolenoid = 1.45 A, while collimation of 300 eV electrons without a stabilization coil was demonstrated to approach 49% at a profile corresponding to Vsolenoid = 20.0 V, Ifloating = 2.78 A, Isolenoid = 4.05 A. 6.4% of the 300 eV electrons' initial velocity is directed to the collector plates. The remaining electrons are trapped by the collimator's magnetic field; these particles oscillate around the null-field region several hundred times and eventually escape to the collector plates. At a solenoid voltage profile of 7 V, 100 eV electrons are collimated with wall and perpendicular-component losses of 31%. Increasing the electron energy beyond 100 eV increases the wall losses, by 25% at 300 eV. Ultimately it was determined that a field strength deriving from 9.5 MAT/m would be required to collimate 14.7 MeV fusion protons from a d-3He fueled IEC fusion core. The proton collimator concept has thus been shown to be effective in transforming an isotropic source into a collimated flow of particles ripe for direct energy conversion.
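A back-of-envelope check of the scale of the collimation problem: the gyroradius r = mv/(qB) of a 14.7 MeV proton for a few illustrative field strengths (the field values below are assumptions, not the design figures from the text).

```python
import numpy as np

E = 14.7e6 * 1.602e-19          # proton kinetic energy (J)
m = 1.673e-27                   # proton mass (kg)
q = 1.602e-19                   # proton charge (C)

# Non-relativistic speed; at ~0.18 c this estimate is good to a few percent
v = np.sqrt(2 * E / m)
for B in [1.0, 5.0, 10.0]:      # magnetic flux density (T), illustrative values
    r = m * v / (q * B)
    print(f"B = {B:4.1f} T -> gyroradius ~ {r * 100:.1f} cm")
```

Even at several tesla the gyroradius is on the order of tens of centimetres, which conveys why collimating fusion protons demands far stronger fields than the electron experiments described above.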

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Liquid-solid interactions become important as dimensions approach the micro/nano-scale. This dissertation focuses on liquid-solid interactions in two distinct applications: capillary-driven self-assembly of thin foils into 3D structures, and droplet wetting of hydrophobic micropatterned surfaces. The phenomenon of self-assembly of complex structures is common in biological systems. Examples include the self-assembly of proteins into macromolecular structures and the self-assembly of lipid bilayer membranes. The principles governing this phenomenon have been applied to induce the self-assembly of millimeter-scale Si thin films into spherical and other 3D structures, which are then integrated into light-trapping photovoltaic (PV) devices. Motivated by this application, we present a generalized analytical study of the self-folding of thin plates into deterministic 3D shapes, through fluid-solid interactions, to be used as PV devices. This study consists of developing a model using beam theory, which incorporates the two competing components: a capillary force that promotes folding, and the bending rigidity of the foil that resists folding into a 3D structure. Through an equivalence argument for thin foils of different geometry, an effective folding parameter, which uniquely characterizes the driving force for folding, has been identified. A criterion for the spontaneous folding of an arbitrarily shaped 2D foil, based on the effective folding parameter, is thus established. Measurements from experiments using different materials and predictions from the model match well, validating the assumptions used in the analysis. As an alternative to the mechanics-model approach, minimization of the total free energy is employed to investigate the interactions between a fluid droplet and a flexible thin film. A 2D energy functional is proposed, comprising the surface energy of the fluid, the bending energy of the thin film, and the gravitational energy of the fluid. Through simulations with Surface Evolver, the shapes of the droplet and the thin film at equilibrium are obtained. A critical thin-film length necessary for complete enclosure of the fluid droplet, and hence successful self-assembly into a PV device, is determined and compared with the experimental results and the mechanics-model predictions. The results from the modeling and energy approaches and the experiments are all consistent. Superhydrophobic surfaces, which have unique properties including self-cleaning and water repellency, are desired in many applications. One excellent example in nature is the lotus leaf. To fabricate these surfaces, well-designed micro/nano-scale surface structures are often employed. In this research, we fabricate superhydrophobic micropatterned polydimethylsiloxane (PDMS) surfaces composed of micropillars of various sizes and arrangements by means of soft lithography. Both anisotropic surfaces, consisting of parallel grooves and cylindrical pillars in rectangular lattices, and isotropic surfaces, consisting of cylindrical pillars in square and hexagonal lattices, are considered. A novel technique is proposed to image the contact line (CL) of the droplet on the hydrophobic surface. This technique provides a new approach to distinguish between partial and complete wetting. The contact area between the droplet and the microtextured surface is then measured for a droplet in the Cassie state, which is a state of partial wetting.
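The competition between capillarity and bending rigidity can be illustrated with the standard elastocapillary scaling: folding is favoured when the foil size exceeds the elastocapillary length L_EC = sqrt(B/gamma), with bending stiffness B = E t^3 / (12 (1 - nu^2)). This is a generic scaling argument, not the dissertation's effective folding parameter itself, and the material values below are illustrative.

```python
import numpy as np

E = 160e9        # Young's modulus of Si (Pa), approximate
nu = 0.22        # Poisson's ratio of Si, approximate
gamma = 0.072    # surface tension of water (N/m)

for t in [50e-9, 500e-9, 5e-6]:                  # foil thickness (m)
    B = E * t**3 / (12 * (1 - nu**2))            # bending stiffness (N m)
    L_ec = np.sqrt(B / gamma)                    # elastocapillary length (m)
    print(f"t = {t * 1e9:7.0f} nm -> L_EC ~ {L_ec * 1e6:.1f} um")
```

For micron-thick Si the elastocapillary length comes out at the millimetre scale, consistent with the millimeter-scale films that fold into 3D structures in the experiments described above.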
The results show that although the droplet is in the Cassie state, the contact area does not necessarily follow Cassie model predictions. Moreover, the CL is not circular and is affected by the micropatterns, in both the isotropic and anisotropic cases. It is therefore suggested that, along with the contact angle (the typical parameter reported in the literature to quantify wetting), the size and shape of the contact area should also be reported. This technique is employed to investigate the evolution of the CL on a hydrophobic micropatterned surface in three cases: a single droplet impacting the micropatterned surface, two droplets coalescing on micropillars, and a receding droplet resting on the micropatterned surface. Another parameter that quantifies hydrophobicity is the contact angle hysteresis (CAH), which indicates the resistance of the surface to the sliding of a droplet of a given volume. The conventional methods of using advancing and receding angles, or a tilting stage, to measure the resistance of the micropatterned surface are indirect, to say nothing of the inaccuracy caused by the discrete, stepwise motion of the CL on micropillars. A micronewton force sensor is instead utilized to directly measure the resisting force by dragging a droplet across a microtextured surface. Together with the proposed imaging technique, the evolution of the CL during sliding is also explored. It is found that, at the onset of sliding, the CL behaves as a linear elastic solid with a constant stiffness. Afterwards, the force first increases, then decreases and reaches a steady state, accompanied by periodic oscillations due to the regular pinning and depinning of the CL. Both the maximum and steady-state forces are primarily dependent on the area fractions of the micropatterned surfaces in our experiments. The resisting force is found to be proportional to the number of pillars that pin the CL at the trailing edge, validating the assumption that the resistance arises mainly from CL pinning at the trailing edge. In each pinning-and-depinning cycle during the steady state, the CL also shows linear elastic behavior, but with a lower stiffness. The force variation and the energy dissipation involved can also be determined. This novel method of measuring the resistance of the micropatterned surface elucidates the dependence on CL pinning and provides more insight into the mechanisms of CAH.
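For reference, the Cassie-Baxter prediction against which such contact measurements are compared takes the form cos(theta*) = f (cos(theta_Y) + 1) - 1, where f is the pillar-top area fraction and theta_Y the Young angle of the flat material. A short sketch with illustrative PDMS values:

```python
import numpy as np

# Cassie-Baxter apparent contact angle on a pillar array (illustrative values)
theta_Y = np.deg2rad(110.0)              # approximate Young angle of flat PDMS

for f in [0.10, 0.25, 0.50]:             # pillar-top (solid) area fractions
    cos_star = f * (np.cos(theta_Y) + 1) - 1
    theta_star = np.degrees(np.arccos(cos_star))
    print(f"f = {f:.2f} -> apparent angle ~ {theta_star:.0f} deg")
```

Sparser pillar arrays (smaller f) predict larger apparent angles; the dissertation's finding is that the measured contact area can deviate from this model even when the droplet remains in the Cassie state.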