903 results for Advanced application and branching systems
Abstract:
IDENTIFICATION OF THE RESEARCH PROBLEM. Non-biodegradable, water-soluble organic substances such as certain herbicides, industrial dyes and metabolites of widely used drugs are a major source of pollution in groundwater of agricultural areas and in industrial and domestic effluents. Reactions photocatalyzed by UV-visible irradiation with organic and inorganic sensitizers are among the most economical and convenient methods for decomposing pollutants into harmless and/or biodegradable byproducts. In many applications, a high degree of specificity, effectiveness and speed of degradation of a given pollutant present in a complex mixture of organic substances in solution is desirable. Nano/micro-particulate systems that form stable aqueous suspensions are particularly desirable because they allow easy application and effective decontamination of large volumes of fluids. HYPOTHESIS AND OBJECTIVES. The general objective of this project is to develop nano/micro-particulate systems composed of molecularly imprinted polymers (MIPs) and photosensitizers (PS). A MIP is a polymer specially synthesized to specifically recognize a given analyte (template molecule). The specific binding activity of MIPs, together with the photocatalytic capacity of the sensitizers, can be used to achieve the specific photodecomposition of "template" molecules (in this case a given pollutant) in solutions containing complex mixtures of organic substances. MATERIALS AND METHODS. Mini-emulsion polymerization techniques will be used to synthesize the nano/micro MIP-PS systems targeting the degradation of selected compounds of interest. Various spectroscopic (steady-state and time-resolved) and chromatographic (HPLC and GC) techniques will be used to characterize the efficiency, mechanisms and specificity of photodegradation in these systems. In addition, single-molecule/single-particle fluorescence techniques will be used to directly measure distributions of binding affinities and photodegradation efficiencies. These measurements will provide important results for analyzing the factors that affect photodegradation efficiency at the nano/micro scale, such as the amount and location of photosensitizers in the polymer matrices and the binding efficiency of the template and of the degradation products to the MIP. EXPECTED RESULTS. The proposed studies aim at a better understanding of photo-initiated processes in nano/micro-particulate environments and at applying that knowledge to the design of optimized systems for the selective photodestruction of socially relevant aqueous pollutants, such as herbicides, industrial residues and metabolites of widely used drugs. IMPORTANCE OF THE PROJECT. The nano/micro-particulate MIP-PS systems to be developed in this project are ideal candidates for targeted treatment of industrial and domestic effluents in which the selective degradation of organic compounds is desired. The knowledge acquired will be essential for building a versatile platform of pollutant-specific photocatalytic systems for the degradation of diverse organic pollutants of social interest.
Regarding the training of human resources, the proposed project will directly contribute to the training of 3 graduate students and 2 undergraduate students. As to institutional capacities, it will contribute to the refurbishment of the Laboratory for Advanced Optical Microscopy (LMOA) at the Dpto. de Química of the UNRC and to the assembly of a fluorescence microscope system that will enable the application of advanced single-molecule fluorescence spectroscopy techniques.
Abstract:
The purpose of this study was to evaluate the determinism of the AS-Interface network and the three main families of control systems that may use it, namely PLC, PC and RTOS. During the course of this study the PROFIBUS and Ethernet field-level networks were also considered, in order to ensure that they would not introduce unacceptable latencies into the overall control system. This research demonstrated that an incorrectly configured Ethernet network introduces unacceptable latencies of variable duration into the control system, so care must be exercised if the determinism of a control system is not to be compromised. This study introduces a new concept of using statistics and process capability metrics, in the form of Cpk values, to specify how suitable a control system is for a given control task. The PLC systems that were tested demonstrated extremely deterministic responses, but when a large number of iterations were introduced in the user program, the mean control system latency was much too great for an AS-Interface network. Thus the PLC was found to be unsuitable for an AS-Interface network if a large, complex user program is required. The PC systems that were tested were non-deterministic and had latencies of variable duration. These latencies became extremely exaggerated when a graphing ActiveX control was included in the control application. These PC systems also exhibited a non-normal frequency distribution of control system latencies, and as such are unsuitable for implementation with an AS-Interface network. The RTOS system that was tested overcame the problems identified with the PLC systems and produced an extremely deterministic response, even when a large number of iterations were introduced in the user program, and is therefore capable of providing a suitably deterministic control system response even when an extremely large, complex user program is required.
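A minimal sketch of the Cpk idea applied to latency data may help here; the latency samples and specification limits below are illustrative, not the study's figures:

# Illustrative Cpk computation for control-system latency samples.
# The data and specification limits are made-up numbers, not the study's.
import statistics

latencies = [4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 5.0]  # measured latencies, ms
usl, lsl = 6.0, 4.0                               # specification limits, ms

mu = statistics.mean(latencies)
sigma = statistics.stdev(latencies)

# Cpk: distance from the mean to the nearest spec limit, in units of 3 sigma.
cpk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
print(f"Cpk = {cpk:.2f}")

Note that Cpk assumes an approximately normal distribution, which is precisely why the non-normal latency distributions observed for the PC systems undermine their suitability.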
Abstract:
THESIS ABSTRACT: Low-temperature thermochronology relies on the application of radioisotopic systems whose closure temperatures are below the temperatures at which the dated phases are formed. In that sense, the results are interpreted as "cooling ages" in contrast to "formation ages". Owing to the low closure temperatures, it is possible to reconstruct exhumation and cooling paths of rocks during their residence at shallow levels of the crust, i.e. within the first ~10 km of depth. Processes occurring at these shallow depths, such as final exhumation, faulting and relief formation, are fundamental for the evolution of mountain belts. This thesis aims at reconstructing the tectono-thermal history of the Aar massif in the Central Swiss Alps by means of zircon (U-Th)/He, apatite (U-Th)/He and apatite fission track thermochronology. The strategy involved acquisition of a large number of samples from a wide range of elevations in the deeply incised Lötschen valley and a nearby NEAT tunnel. This unique location allowed us to precisely constrain the timing, amount and mechanisms of exhumation of the main orographic feature of the Central Alps, to evaluate the role of topography on the thermochronological record and to test the impact of hydrothermal activity. Samples were collected from altitudes ranging between 650 and 3930 m and were grouped into five vertical profiles on the surface and one horizontal profile in the tunnel. Where possible, all three radiometric systems were applied to each sample. Zircon (U-Th)/He ages range from 5.1 to 9.4 Ma and are generally positively correlated with altitude. Age-elevation plots reveal a distinct break in slope, which translates into an exhumation rate increasing from ~0.4 to ~3 km/Ma at 6 Ma. This acceleration is independently confirmed by increased cooling rates on the order of 100°C/Ma, constrained on the basis of age differences between the zircon (U-Th)/He and the remaining systems. Apatite fission track data also plot on a steep age-elevation curve, indicating rapid exhumation until the end of the Miocene. The 6 Ma event is interpreted as reflecting tectonically driven uplift of the Aar massif. The late Miocene timing implies that the increase of precipitation in the Pliocene did not trigger rapid exhumation in the Aar massif. The Messinian salinity crisis in the Mediterranean could not directly intensify erosion of the Aar, but the associated erosional output from the entire Alps may have tapered the orogenic wedge and caused reactivation of thrusting in the Aar massif. The high exhumation rates in the Messinian were followed by a decrease to ~1.3 km/Ma, as evidenced by ~8 km of exhumation during the last 6 Ma. The slowing of exhumation is also apparent from apatite (U-Th)/He age-elevation data in the northern part of the Lötschen valley, where they plot on a ~0.5 km/Ma line and range from 2.4 to 6.4 Ma. However, the apatite (U-Th)/He and fission track data from the NEAT tunnel indicate a perturbation of the record. The apatite ages are youngest under the axis of the valley, in contrast to the expected pattern in which they would be youngest in the deepest sections of the tunnel due to heat advection into ridges. The valley, however, developed in relatively soft schists, while the ridges are built of solid granitoids. In line with hydrological observations from the tunnel, we suggest that the relatively permeable rocks under the valley floor served as conduits for geothermal fluids that caused reheating, leading to partial helium loss and fission track annealing in apatites.
In consequence, apatite ages from the lowermost samples are too young, and the calculated exhumation rates may underestimate true values. This study demonstrated that high-density sampling is indispensable for providing meaningful thermochronological data in the Alpine setting. The multi-system approach allows verifying the plausibility of the data and highlighting sources of perturbation. THESIS SUMMARY: Low-temperature thermochronology relies on radiometric systems whose closure temperatures are well below the crystallization temperature of the mineral. The results are therefore interpreted as cooling ages, as opposed to the formation ages obtained with other dating systems. Thanks to the low closure temperatures, it is possible to reconstruct the cooling and exhumation paths of rocks during their residence in the shallow crust (down to ~10 km). The processes at play at these shallow depths, such as final exhumation, fracturing and faulting, and relief formation, are fundamental to the evolution of mountain belts. In recent years it has become clear that the thermochronological record in orogens can be influenced by relief and reset by heat advection linked to the circulation of geothermal fluids after initial cooling. The objective of this thesis is to reconstruct the tectono-thermal history of the Aar massif in the Central Swiss Alps using three thermochronometers: zircon (U-Th)/He, apatite (U-Th)/He and apatite fission tracks. To this end, we collected a large number of samples from different altitudes in the deeply incised Lötschental and from the NEAT tunnel. This sampling strategy allowed us to precisely constrain the timing, amount and mechanisms of exhumation in this part of the Central Alps, to evaluate the role of topography on the thermochronological record, and to test the impact of hydrothermal activity on the geochronometers. Samples were collected at altitudes between 650 and 3930 m along five vertical profiles at the surface and one in the tunnel. Where possible, the three radiometric systems were applied to each sample. Zircon (U-Th)/He ages range from 5.1 to 9.4 Ma and correlate positively with altitude. Age-elevation plots show a clear break in slope that translates into an increase in exhumation rate from 0.4 to 3 km/Ma at 6 Ma. This acceleration is independently confirmed by cooling rates on the order of 100°C/Ma derived from the age differences between the zircon (U-Th)/He and the other systems. The apatite fission track data also indicate rapid exhumation until the end of the Miocene. We interpret this 6 Ma event as reflecting tectonic uplift of the Aar massif. Its late Miocene timing implies that the increase in precipitation in the Pliocene did not trigger this rapid exhumation of the Aar massif.
The Messinian salinity crisis in the Mediterranean could not have had a direct effect on the erosion of the Aar massif, but the erosion associated with this event may have tapered the Alpine orogenic wedge and caused reactivation of thrusting in the Aar massif. The rapid Miocene exhumation was followed by a decrease in exhumation rates over the last 6 Ma (down to ~1.3 km/Ma). However, the apatite (U-Th)/He ages and the apatite fission track data from the tunnel samples record a perturbation of the pattern described above. The apatite ages are noticeably younger under the axis of the valley than the expected age profile; heat advection into the valley flanks would instead predict the youngest ages under the deepest sections of the tunnel. The valley is carved into schists, whereas its flanks consist of harder granitoids. In line with hydrological observations in the tunnel, we suggest that the high permeability of the rocks under the valley axis allowed the infiltration of geothermal fluids that reheated the rocks. This reheating would have induced helium loss and fission track annealing in the apatites, resulting in rejuvenated apatite ages and an underestimation of exhumation rates under the valley axis. This study served to demonstrate the need for dense, precise sampling in order to obtain high-quality thermochronological data in the Alpine setting. The multi-system approach allowed us to check the consistency of the acquired data and to identify possible sources of error in thermochronological studies. SUMMARY FOR THE GENERAL PUBLIC: During an orogeny, rocks go through a cycle comprising subduction, deformation, metamorphism and, finally, a return to the surface (exhumation). Exhumation results from deformation within the collision zone, leading to shortening and thickening of the rock pile, which translates into uplift of the rocks, creation of topography and erosion. Since erosion acts as a scraper on the upper part of the edifice, attempts have been made to correlate episodes of rapid exhumation with periods of intense erosion driven by climate change. Knowing precisely when and where exhumation occurred is of prime importance for any reconstruction of the evolution of a mountain belt. These constraints are obtained by tracing the changes in rock temperature through time, which yields the cooling rate. The moment at which rocks cooled through a given temperature is constrained by radiometric dating techniques. These methods rely on the decay of radiogenic isotopes, such as uranium and potassium, both abundant in rocks of the Earth's crust. The decay products are not retained in the host minerals until the rock cools below a so-called closure temperature, specific to each dating system. For example, the radioactive decay of uranium and thorium atoms produces helium atoms that escape from a zircon crystal at temperatures above 200°C.
By measuring the parent uranium content and the accumulated helium, and knowing the decay rate, it is possible to calculate when the sampled rock passed below 200°C. If the geothermal gradient is known, closure temperatures can be converted into depths (e.g. 200°C ≈ 7 km), and cooling rates into exhumation rates. Moreover, by radiometrically dating vertically spaced samples, the exhumation rate of the sampled section can be constrained directly from the age differences between neighbouring samples. In the Swiss Alps, the Aar massif is a major orographic structure. With altitudes above 4000 m and a spectacular relief of more than 2000 m, the massif dominates the central part of the mountain belt. The rocks exposed at the surface today were buried more than 10 km deep 20 Ma ago, but the present topography of the Aar massif appears to have developed mainly through active uplift over the last few million years, that is, since the late Neogene. This period includes an abrupt climate change that affected Europe about 5 Ma ago and brought heavy precipitation, most likely increasing erosion and accelerating exhumation of the Alps. In this study, we used the zircon (U-Th)/He dating system, whose closure temperature of 200°C is low enough to characterize late Neogene/Pliocene exhumation. The samples come from the Lötschental and from the world's deepest railway tunnel (NEAT), in the western part of the Aar massif. Taken together, these samples span 3000 m of elevation and give ages from 5.1 to 9.4 Ma. The higher-altitude (and therefore older) samples document an exhumation rate of 0.4 km/Ma until 6 Ma ago, whereas the lowest samples have similar ages of 6 to 5.4 Ma, yielding a rate of up to 3 km/Ma. These data reveal a dramatic acceleration of the exhumation of the Aar massif 6 Ma ago. The late Miocene exhumation of the massif therefore predates the Pliocene climate change. However, during the salinity crisis of 6-5.3 Ma ago (Messinian), the level of the Mediterranean Sea dropped by 3 km. Such a lowering of the erosional base level may have accelerated exhumation of the Alps, but the southern Alpine basin was too far from the Aar massif to influence its erosion. We conclude that (U-Th)/He dating precisely constrains the timing and exhumation of the Aar massif and that, as regards the tectonics-erosion duality, tectonics predominates in the case of the Aar massif.
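To make the age-elevation reasoning above concrete, the following minimal sketch extracts an apparent exhumation rate from vertically spaced ages; the sample values are hypothetical, not the thesis data:

# Apparent exhumation rate from an age-elevation profile.
# Ages and elevations below are hypothetical illustrations.
import numpy as np

ages = np.array([9.0, 8.1, 7.2, 6.6, 6.0])   # zircon (U-Th)/He ages, Ma
elev = np.array([3.6, 3.0, 2.4, 1.9, 1.4])   # sample elevations, km

# The slope of elevation versus age approximates the exhumation rate
# (km/Ma) over the sampled interval, assuming a steady closure depth.
# A break in slope, as found in the thesis, would require a piecewise fit.
rate, intercept = np.polyfit(ages, elev, 1)
print(f"apparent exhumation rate: {rate:.2f} km/Ma")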
Abstract:
This report was developed to provide summary information allowing state agency staff, practitioners and juvenile justice system officials to access specific sections of Iowa's Three-Year Plan. It includes the "Service Network" section of Iowa's 2006 Juvenile Justice and Delinquency Prevention Act formula grant Three-Year Plan. The complete Three-Year Plan serves as Iowa's application for Juvenile Justice and Delinquency Prevention Act formula grant funding. The information included in this report provides an overview of some of the systems and services that relate to Iowa's delinquency and CINA systems. The systems and services discussed include substance abuse, mental health, alternative or special education, and job training.
Abstract:
PURPOSE: As no curative treatment exists for advanced pancreatic and biliary cancer with malignant ascites, new modalities that may improve the response to available chemotherapies must be explored. This phase I study assessed the feasibility, tolerability and pharmacokinetics of regional treatment with gemcitabine administered in escalating doses by the stop-flow approach to patients with advanced abdominal malignancies (adenocarcinoma of the pancreas, n = 8, and cholangiocarcinoma of the liver, n = 1). EXPERIMENTAL DESIGN: Gemcitabine at 500, 750 and 1,125 mg/m² was administered to three patients at each dose level by loco-regional chemotherapy, using hypoxic abdominal stop-flow perfusion. This was achieved by aorto-caval occlusion with balloon catheters connected to an extracorporeal circuit. Concentrations of gemcitabine and its main metabolite 2',2'-difluorodeoxyuridine (dFdU) were measured by high-performance liquid chromatography with UV detection in the extracorporeal circuit during the 20 min of stop-flow perfusion, and in peripheral plasma for 420 min. Blood gases were monitored during the stop-flow perfusion, and hypoxia was considered stringent if two of the following endpoints were met: pH ≤ 7.2, pO2 nadir ratio ≤ 0.70 or pCO2 peak ratio ≥ 1.35. The tolerability of the procedure was also assessed. RESULTS: Stringent hypoxia was achieved in four patients. Very high levels of gemcitabine were rapidly reached in the extracorporeal circuit during the 20 min of stop-flow perfusion, with Cmax levels in the abdominal circuit of 246 (±37%), 2,039 (±77%) and 4,780 (±7.3%) µg/ml for the three dose levels 500, 750 and 1,125 mg/m², respectively. These Cmax values were between 13 (±51%) and 290 (±12%) times higher than those measured in peripheral plasma. Similarly, the abdominal exposure to gemcitabine, calculated as AUC(t0-20), was between 5.5 (±43%) and 200 (±66%)-fold higher than the systemic exposure. Loco-regional exposure to gemcitabine was statistically higher in the presence of stringent hypoxia (P < 0.01 for both Cmax and AUC(t0-20), normalised to the gemcitabine dose). Toxicities were acceptable considering the complexity of the procedure and were mostly hepatic; it was not possible to differentiate the respective contributions of systemic and regional exposures. A significant correlation (P < 0.05) was found between the systemic Cmax of gemcitabine and the nadir of both leucocytes and neutrophils. CONCLUSIONS: Regional exposure to gemcitabine, the current standard drug for advanced adenocarcinoma of the pancreas, can be markedly enhanced using an optimised hypoxic stop-flow perfusion technique, with acceptable toxicities up to a dose of 1,125 mg/m². However, the activity of gemcitabine under hypoxic conditions is not as firmly established as that of other drugs such as mitomycin C, melphalan or tirapazamine. Further studies of this investigational modality, but with bioreductive drugs, are therefore warranted, first to evaluate tolerance in a phase I study and later to assess whether it improves the response to chemotherapy.
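For readers less familiar with the exposure metrics above, this sketch shows how Cmax and AUC(t0-20) are typically computed from concentration-time samples; the values are illustrative, not the study's data:

# Illustrative Cmax and AUC(t0-20) from concentration-time samples
# taken in the extracorporeal circuit. Values are hypothetical.
import numpy as np

t = np.array([0, 5, 10, 15, 20])            # time, min
c = np.array([0, 1200, 2400, 2100, 1800])   # gemcitabine, µg/ml

cmax = c.max()                # peak concentration, µg/ml
auc_0_20 = np.trapz(c, t)     # trapezoidal rule, µg·min/ml
print(cmax, auc_0_20)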
Space Competition and Time Delays in Human Range Expansions. Application to the Neolithic Transition
Abstract:
Space competition effects are well known in many microbiological and ecological systems. Here we analyze such an effect in human populations. The Neolithic transition (the change from foraging to farming) was mainly the outcome of a demographic process that spread gradually throughout Europe from the Near East. In Northern Europe, archaeological data show a slowdown in the Neolithic rate of spread that can be related to a high indigenous (Mesolithic) population density hindering the advance as a result of space competition between the two populations. We measure this slowdown from a database of 902 Early Neolithic sites and develop a time-delayed reaction-diffusion model with space competition between Neolithic and Mesolithic populations to predict the observed speeds. The comparison of the predicted speed with the observations, and with a previous non-delayed model, shows that both effects, the time delay due to the generation lag and the space competition between populations, are crucial for understanding the observations.
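For context, in the absence of the competition term the front speed of such a time-delayed reaction-diffusion model is commonly written (following Fort and Méndez's hyperbolic correction; the space-competition interaction of this paper modifies it further) as

v = \frac{2\sqrt{aD}}{1 + aT/2},

where a is the initial population growth rate, D the diffusion coefficient and T the generation lag; letting T \to 0 recovers Fisher's classical speed v = 2\sqrt{aD}.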
Abstract:
Healthcare today uses the possibilities of information technology (IT) to improve the quality of care, reduce care-related costs, and simplify and streamline physicians' workflow. Information systems, which form the core of every IT solution, must be developed to meet numerous requirements, one of which is the ability to integrate seamlessly with other information systems. System integration remains a challenging task, however, even though several standards have been developed for it. This work describes the interfacing solution of a newly developed medical information system. The requirements placed on such an application are discussed, and the way in which these requirements are met is presented. The interfacing solution is divided into two parts: the information system interface and the interfacing engine. The former comprises the basic functionality needed to receive data from and send data to other systems, while the latter provides support for the standards used in the production environment. The design of both parts is presented in detail in this work. The problem was solved by means of a modular and generic design. This approach is shown to be a robust and flexible solution that can address a wide range of requirements placed on an interfacing solution. Furthermore, it is shown how, thanks to its flexibility, the solution can easily be adapted to requirements that were not identified in advance, thus providing a foundation for future needs as well.
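A minimal sketch of the two-part split described above; the class names and the message format are illustrative assumptions, not the thesis's actual design:

# Sketch of the described architecture: a thin system interface that
# moves messages in and out, and an interfacing engine that encapsulates
# the standards. All names and the JSON format are illustrative.
import json
from abc import ABC, abstractmethod

class InterfacingEngine(ABC):
    """Encapsulates one messaging standard used in production."""
    @abstractmethod
    def encode(self, record: dict) -> bytes: ...
    @abstractmethod
    def decode(self, payload: bytes) -> dict: ...

class JsonEngine(InterfacingEngine):
    def encode(self, record: dict) -> bytes:
        return json.dumps(record).encode()
    def decode(self, payload: bytes) -> dict:
        return json.loads(payload.decode())

class SystemInterface:
    """Basic send/receive functionality, independent of any standard."""
    def __init__(self, engine: InterfacingEngine):
        self.engine = engine  # swap engines to support new standards

    def send(self, record: dict, transport) -> None:
        transport.write(self.engine.encode(record))

    def receive(self, transport) -> dict:
        return self.engine.decode(transport.read())

The point of the split is visible in the constructor: supporting a standard that was not identified in advance only requires a new engine, not changes to the system interface.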
Abstract:
There is an increasing reliance on computers to solve complex engineering problems, because computers, in addition to supporting the development and implementation of adequate and clear models, can especially minimize the financial support required. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics problems makes it difficult or impossible to solve the equations of an object in a flow exactly. Approximate solutions can be obtained by construction and measurement of prototypes placed in a flow, or by numerical simulation. Since the use of prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process, as the simulation setup and parameters can be altered much more easily than in a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids) and for fluid flow through porous media. The models have merit as scientific tools and also have practical applications in industry. Most of the numerical simulations were performed with the commercial software Fluent, with user-defined functions added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole through a capillary to a venule showed that the VOF-based model can successfully predict the deformation and flow of RBCs in an arteriole; furthermore, the results correspond to experimental observations showing that the RBC deforms during its movement. The concluding remarks provide a sound methodology and a mathematical and numerical framework for the simulation of blood flow in branching geometries. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even larger than the conventional one and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared to cases where the magnetic field is parallel to the temperature gradient. In addition, a statistical evaluation (the Taguchi technique) of the magnetic fluids showed that the temperature and the initial concentration of the magnetic phase make the maximum and minimum contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers, based on pore permeability and interstitial fluid velocity. The results agreed well with the correlation of Macdonald et al. (1979) over the range of flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
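As an illustration of the porous-media comparison mentioned above, here is a sketch of an Ergun-type pressure-drop estimate using the constants commonly quoted for the Macdonald et al. (1979) correlation (A = 180, B = 1.8 for smooth particles); the inputs are illustrative, not the thesis geometry:

# Packed-bed pressure drop per unit length, Ergun-type form with the
# Macdonald et al. (1979) constants as commonly quoted. Hypothetical inputs.
def macdonald_dp_per_length(u, d, eps, mu, rho, A=180.0, B=1.8):
    """u: superficial velocity [m/s], d: particle diameter [m],
    eps: porosity [-], mu: viscosity [Pa s], rho: density [kg/m3]."""
    viscous = A * mu * (1 - eps) ** 2 * u / (eps ** 3 * d ** 2)
    inertial = B * (1 - eps) * rho * u ** 2 / (eps ** 3 * d)
    return viscous + inertial  # Pa/m

print(macdonald_dp_per_length(u=0.01, d=1e-3, eps=0.4, mu=1e-3, rho=1000.0))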
Abstract:
Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Unlike "traditional" biology, systems biology focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization and concurrency, among many others. The very terminology of systems biology is "foreign" to "traditional" biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods originating in the fields of computer science and mathematics for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods, applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques, together with the model analysis methodologies, constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussions of their potentials and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
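As a flavour of the modelling workflow described above, here is a toy mass-action ODE model integrated numerically; the reactions and rate constants are generic placeholders, not the thesis's heat shock response or filament self-assembly models:

# Toy two-species mass-action model: A <-> B with rates k1, k2.
# Placeholder reactions, not the thesis's case-study models.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.5, 0.1  # rate constants, arbitrary units

def rhs(t, y):
    a, b = y
    # A -> B at rate k1*a; B -> A at rate k2*b
    return [-k1 * a + k2 * b, k1 * a - k2 * b]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0])
print(sol.y[:, -1])  # tends to the equilibrium (k2, k1) / (k1 + k2)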
Abstract:
A fast-changing environment puts pressure on firms to share large amounts of information with their customers and suppliers. The terms information integration and information sharing are essential for facilitating a smooth flow of information throughout the supply chain, and the terms are used interchangeably in the research literature. By integrating and sharing information, firms aim to improve their logistics performance. Firms share information with their suppliers and customers using traditional communication methods (telephone, fax, e-mail, written and face-to-face contacts) and using advanced or modern communication methods such as electronic data interchange (EDI), enterprise resource planning (ERP), web-based procurement systems, electronic trading systems and web portals. Adopting new ways of using IT is one important resource for staying competitive in a rapidly changing market (Saeed et al. 2005, 387), and an information system that provides people with the information they need to perform their work will support company performance (Boddy et al. 2005, 26). The purpose of this research has been to test and understand the relationship between information integration with key suppliers and/or customers and a firm's logistics performance, especially when information technology (IT) and information systems (IS) are used to integrate information. Quantitative and qualitative research methods have been used to perform the research. Special attention has been paid to the scope, level and direction of information integration (Van Donk & van der Vaart 2005a). In addition, the four elements of integration (Jahre & Fabbe-Costes 2008) are closely tied to the frame of reference: integration of flows, integration of processes and activities, integration of information technologies and systems, and integration of actors. The study found that information integration has a weak positive relationship to operational performance and a moderate positive relationship to strategic performance. The potential performance improvements found in this study vary from efficiency, delivery and quality improvements (operational) to profit, profitability or customer satisfaction improvements (strategic). The results indicate that although information integration has an impact on a firm's logistics performance, not all performance improvements have been achieved. The study also found that the use of IT and IS has a moderate positive relationship to information integration. Almost all case companies agreed that the use of IT and IS could facilitate information integration and improve their logistics performance. The case companies felt that implementing a web portal or a data bank would benefit them by enhancing their performance and increasing information integration.
Abstract:
-
Abstract:
The aim of this thesis is to propose a novel control method for teleoperated electrohydraulic servo systems that implements a reliable haptic sense in the human-manipulator interaction and ideal position control in the manipulator-task environment interaction. The proposed method is a universal technique, independent of the actual control algorithm, and can be applied together with other suitable control methods as a real-time control strategy. The motivation for developing this control method is the need for a reliable real-time controller for teleoperated electrohydraulic servo systems that provides highly accurate position control based on joystick inputs, with haptic capabilities. The contribution of the research is that the proposed control method combines a directed random search method with a real-time simulation to develop an intelligent controller in which each generation of parameters is tested on-line by the real-time simulator before being applied to the real process. The controller was evaluated on a hydraulic position servo system. The simulator of the hydraulic system was built based on the Markov chain Monte Carlo (MCMC) method. A particle swarm optimization algorithm combined with the foraging behavior of E. coli bacteria was utilized as the directed random search engine. The control strategy allows the operator to be plugged into the work environment dynamically and kinetically. This helps to ensure that the system has haptic sense with high stability, without abstracting away the dynamics of the hydraulic system. The new control algorithm provides asymptotically exact tracking of both the position and the contact force. In addition, this research proposes a novel method for the re-calibration of multi-axis force/torque sensors. The method makes several improvements over traditional methods: it can be used without dismantling the sensor from its application, it requires a smaller number of standard loads for calibration, and it is more cost-efficient and faster than traditional calibration methods. The proposed method was developed in response to re-calibration issues with the force sensors utilized in teleoperated systems, with the aim of avoiding dismantling the sensors from their applications for calibration. A major complication with many manipulators is the difficulty of accessing them when they operate inside an inaccessible environment, especially if that environment is harsh, as in radioactive areas. The proposed technique is based on design-of-experiments methodology and has been successfully applied to different force/torque sensors; this research presents an experimental validation of the calibration method with one of the force sensors to which it has been applied.
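A minimal sketch of the simulator-gated search loop described above; the proposal step, cost interface and acceptance threshold are illustrative assumptions, not the thesis's PSO/bacterial-foraging algorithm:

# Directed random search in which every candidate parameter set is
# evaluated on a real-time simulator before the real process is touched.
# Names and the acceptance threshold are illustrative.
import random

def propose(best, scale=0.1):
    """Random search step around the current best parameter vector."""
    return [p + random.gauss(0.0, scale) for p in best]

def tune(simulate_cost, apply_to_process, init, accept=1.0, iters=100):
    best, best_cost = init, simulate_cost(init)
    for _ in range(iters):
        cand = propose(best)
        cost = simulate_cost(cand)       # tested on the simulator first
        if cost < best_cost:
            best, best_cost = cand, cost
            if best_cost < accept:       # deemed safe for the real plant
                apply_to_process(best)
    return best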
Abstract:
The main objectives of the present investigation were to evaluate the qualitative and quantitative distribution of the natural cyanobacterial populations and their ecobiological properties along the Cochin estuary, and their application in aquaculture systems as a nutritional supplement owing to their nutrient-rich biochemical composition and antioxidant potential. This thesis presents a detailed account of the distribution of cyanobacteria in the Cochin estuary, an assessment of the physico-chemical parameters and nutrients of the study site, an evaluation of the effect of physico-chemical parameters on cyanobacterial distribution and abundance, the isolation, identification and culturing of cyanobacteria, the biochemical composition and productivity of cyanobacteria, and an evaluation of the potential of selected cyanobacteria as antioxidants against ethanol-induced lipid peroxidation. The pH, salinity and nutritional requirements were optimized for low-cost production of the selected cyanobacterial strains. The present study provides an insight into the distribution, abundance, diversity and ecology of cyanobacteria in the Cochin estuary. From the results, it is evident that the ecological conditions of the Cochin estuary support rich cyanobacterial growth.
Abstract:
Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land-use patterns. An essential methodology for studying and quantifying such interactions is the adoption of land-use models. By applying land-use models, it is possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of driving forces. Modeling land use and land-use change has a long tradition; on the regional scale in particular, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and its strict separation of application and implementation. These features enable the efficient development, testing and use of integrated land-use models. On the system side, SITE provides generic data structures (grids, grid cells, attributes etc.) and takes over responsibility for their administration. By means of a scripting language (Python), extended with language features specific to land-use modeling, these data structures can be utilized and manipulated by modeling applications. The scripting language interpreter is embedded in SITE. The integration of sub-models can be achieved via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and analysis support for simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was placed on expandability, maintainability and usability. Along with the modeling framework, a land-use model for the analysis of the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, the socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics for the historical period 1981 to 2002. In parallel, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. This case study showed that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be regarded as unsustainable, even though increased agricultural use implied economic improvements and higher farmers' incomes. Owing to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map comparison algorithm capable of comparing a simulation result to a reference map. Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization, with the figure-of-merit map comparison measure as the objective function. The calibration covered the period from 1981 to 2002, for which reference land-use maps were compiled. It was shown that efficient automated model calibration with SITE is possible; nevertheless, the selection of the calibration parameters requires detailed knowledge of the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and of the resulting coffee fruit set, on the distance to the closest natural forest was integrated. The land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenue per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
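Since SITE applications are scripted in Python, a stand-alone sketch of a figure-of-merit style map comparison, as used for the calibration objective, may be useful; this is an illustrative function on boolean change maps, not SITE's actual interface:

# Figure of merit between simulated and reference land-use *change* maps:
# FoM = hits / (hits + misses + false alarms), in [0, 1].
# Stand-alone toy, not SITE's API.
import numpy as np

def figure_of_merit(simulated_change, reference_change):
    hits = np.logical_and(simulated_change, reference_change).sum()
    misses = np.logical_and(~simulated_change, reference_change).sum()
    false_alarms = np.logical_and(simulated_change, ~reference_change).sum()
    denom = hits + misses + false_alarms
    return hits / denom if denom else 1.0

sim = np.array([[True, False], [True, True]])
ref = np.array([[True, True], [False, True]])
print(figure_of_merit(sim, ref))  # 2 hits, 1 miss, 1 false alarm -> 0.5

A genetic algorithm, as used for the STORMA model, would then maximize this value over the selected model parameters.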
Abstract:
The solid-phase reaction of NiPt/Si and NiPt/SiGe is one of the key issues for silicide (germanosilicide) technology. In particular, NiPtSiGe, in which four elements are involved, is a very complex system; a detailed study of the interfacial reaction between the NiPt alloy film and the SiGe substrate is therefore necessary. Besides traditional material characterization techniques, characterization of Schottky diodes is a good way to detect interface imperfections or defects that are not easily found on large-area blanket samples. The I-V characteristics of 10 nm Ni(Pt = 0, 5, 10 at.%) germanosilicide/n-Si₀.₇Ge₀.₃ and silicide/n-Si contacts annealed at 400 and 500°C were studied. For the Schottky contact on n-Si, the addition of Pt to the Ni(Pt) alloy greatly increases the Schottky barrier height (SBH); with the inclusion of 10% Pt, the SBH increases by ~0.13 eV. However, for the Schottky contacts on SiGe, the addition of 10% Pt increases the SBH by only ~0.04 eV. This is explained by pinning of the Fermi level. The forward I-V characteristics of 10 nm Ni(Pt = 0, 5, 10 at.%)SiGe/SiGe contacts annealed at 400°C were investigated in the temperature range from 93 to 300 K. At higher temperatures (>253 K), and at larger bias at low temperatures (<253 K), the I-V curves are well explained by a thermionic emission model. At lower temperatures, excess currents occur in the lower forward bias region, which can be explained by recombination/generation currents, by patches due to SBH inhomogeneity with a pinch-off model, or by a combination of these mechanisms.
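For reference, the thermionic emission model invoked above is conventionally written (standard textbook form, stated here for context) as

I = A A^{*} T^{2} \exp\!\left(-\frac{q\Phi_{B}}{kT}\right)\left[\exp\!\left(\frac{qV}{nkT}\right) - 1\right],

where A is the diode area, A^{*} the effective Richardson constant, \Phi_{B} the Schottky barrier height and n the ideality factor; fitting the forward I-V curves to this expression at each temperature yields the barrier heights discussed above.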