947 results for Which-way experiments


Relevance: 30.00%

Abstract:

Swift heavy ions (mass number above 15 and energies above the MeV/amu level) transfer their energy mainly to the electronic system, with small momentum transfer per collision. They therefore produce linear modified regions (columnar nano-tracks) around the straight ion trajectory, with marked changes with respect to the virgin material, e.g. phase transitions, amorphization, compaction, and changes in physical or chemical properties. In crystalline materials the most distinctive feature of swift heavy ion irradiation is the production of amorphous tracks embedded in the crystal. Lithium niobate is a relevant optical material that is birefringent due to its anisotropic trigonal structure, whereas the amorphous phase is isotropic; in addition, its refractive index contrasts strongly with those of the crystalline phase. This makes it possible to fabricate waveguides by swift ion irradiation, with important technological relevance. From the mechanical point of view, the inclusion of an amorphous nano-track (with a density 15% lower than that of the crystal) generates significant stress/strain fields around the track. These fields are ultimately the origin of crack formation, with fatal consequences for the integrity of the samples and for the viability of the method for nano-track formation. For certain crystal cuts (X and Y), the fields are clearly anisotropic due to the crystal anisotropy. We have used finite element methods to calculate the stress/strain fields that appear around the ion-generated amorphous nano-tracks for a variety of ion energies and doses. A remarkable feature for X-cut samples is that the maximum shear stress appears on preferential planes forming ±45° with the crystallographic planes, which leads to the generation of oriented surface cracks as the dose increases.
The growth of the cracks through the anisotropic crystal has been studied by means of novel extended finite element methods, which represent cracks as discontinuities. In this way we can study how the length and depth of a crack evolve as a function of the ion dose. We show how the simulations compare with experiments, and discuss their application in materials modification by ion irradiation.
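The mechanical driving force described above can be illustrated with a back-of-the-envelope sketch: an amorphous track modelled as a misfitting cylindrical inclusion in an isotropic elastic matrix, with the classic Lamé far-field stress decay. This is only a rough isotropic estimate (the abstract emphasizes that the real fields are anisotropic and computed by FEM); the moduli, the track radius, and the simple pressure estimate below are all illustrative assumptions, not values from the study.

```python
# Hedged order-of-magnitude sketch: stress around a cylindrical amorphous
# track treated as a misfitting inclusion in an isotropic elastic matrix.
# All numerical values are illustrative assumptions, not values from the study.

def misfit_pressure(density_deficit, shear_modulus, bulk_modulus):
    """Rough pressure estimate for a cylindrical inclusion with a density misfit."""
    # A 15% lower density means the amorphous material "wants" ~17.6% more volume.
    volumetric_strain = 1.0 / (1.0 - density_deficit) - 1.0
    # Crude estimate: eigenstrain pressure moderated by the matrix shear stiffness.
    return bulk_modulus * volumetric_strain * shear_modulus / (bulk_modulus + shear_modulus)

def radial_stress_outside(p, a, r):
    """Lame solution: radial stress at distance r from a pressurized track of radius a (r >= a)."""
    return -p * (a / r) ** 2

# Illustrative values (assumed): LiNbO3-like moduli in GPa, 5 nm track radius.
p = misfit_pressure(density_deficit=0.15, shear_modulus=60.0, bulk_modulus=110.0)
print(f"inclusion pressure ~ {p:.1f} GPa")
print(f"radial stress at r = 2a: {radial_stress_outside(p, a=5.0, r=10.0):.2f} GPa")
```

The 1/r² decay explains why neighbouring tracks interact only at high dose, when track fields begin to overlap and oriented cracks can nucleate.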


Relevance: 30.00%

Abstract:

The main goal of this proposal is to bring together the owners of the most advanced CPV technology, with respect to the state of the art, in order to research new applications for CPV systems from that leading position. In addition to opening up new markets, it will uncover possible sources of failure in new environments outside Europe, in order to assure component reliability. The project will also improve the current technology of the industrial partners (ISOFOTON and CONCENTRIX) by accelerating the learning curve that CPV must follow to reach the competitive market, lowering costs significantly below current flat-panel PV within 3-4 years. The use of CPV systems in remote areas, with harsher radiation, ambient and infrastructure conditions, will help to increase the rate of progress of this technology. In addition, the contribution of ISFOC, which brings together seven power plants from seven CPV technologies totalling up to 3 MWpeak, will allow the most complete database of component and system performance to be generated, together with the effects of radiation and meteorology on system operation. Finally, regarding new applications for CPV, the project will for the first time use a 25 kWp CPV system in a stand-alone station in Egypt (NWRC) for water pumping and irrigation. Similarly, ISOFOTON will connect up to 25 kWp of CPV to the Moroccan ONE utility grid. From the research point of view, which is directly addressed by the scope of the call, cooperative research between UPM, FhG-ISE and the two companies will be favoured by the fact that all are progressing in similar directions, developing two-stage-optics CPV systems. In addition to these technology improvements, UPM is very interested in developing a recently patented new module concept that fulfils all the required characteristics of a good CPV module with fewer components and at reduced cost.

Relevance: 30.00%

Abstract:

While workflow technology has gained momentum in the last decade as a means of specifying and enacting computational experiments in modern science, reusing and repurposing existing workflows to build new scientific experiments is still a daunting task. This is partly due to the difficulty scientists experience when attempting to understand existing workflows, which contain several data preparation and adaptation steps in addition to the scientifically significant analysis steps. One way to tackle the understandability problem is to provide abstractions that give a high-level view of the activities undertaken within workflows. As a first step towards such abstractions, this paper reports the results of a manual analysis performed over a set of real-world scientific workflows from the Taverna and Wings systems. Our analysis has resulted in a set of scientific workflow motifs that outline (i) the kinds of data-intensive activities observed in workflows (data-oriented motifs), and (ii) the different manners in which activities are implemented within workflows (workflow-oriented motifs). These motifs can inform workflow designers about good and bad practices for workflow development, and can inform the design of automated tools for the generation of workflow abstractions.

Relevance: 30.00%

Abstract:

In the field of room acoustics it is common to use scale models to study a room. This method makes it possible to predict the room's behaviour and to detect and correct problems before construction, saving considerable resources. Nowadays the method has been relegated to a secondary role by the rise of simulation software, which allows rooms to be studied cheaply, flexibly and simply, and is potentially less time-consuming. Nevertheless, the scale-model method is still under study, as it can provide valuable additional information. This project focuses on the pedagogic possibilities of the scale-model method, which offers students the opportunity to study and grasp some of the most important phenomena in room acoustics more intuitively than a software simulation alone. Furthermore, most existing software in this field is aimed at technicians working as efficiently as possible in the lab, not at students trying to understand and learn. Here, the facilities and resources of Syddansk Universitet in this area are studied and evaluated, together with the experimental procedures, paying special attention not only to their reliability and accuracy but also to their didactic possibilities. Where possible, improvements that could enhance any of these aspects are suggested.
The project is divided into clearly differentiated sections. The Background and Theoretical Basis section introduces the study and simulation of acoustic enclosures, explains their importance and usefulness, and reviews the current state of these techniques, covering the different methods in use together with their theoretical foundations and their main advantages and drawbacks. The Project section analyses the resources available to the student, from the software and hardware involved to the measurement equipment and other resources needed for the laboratory exercises. The core of the work lies here: measuring and verifying the most relevant characteristics of the equipment involved, confirming its validity and accuracy from both a technical and a pedagogical point of view, and establishing the limits within which the model can be considered reliable. This section closes with an analysis of the influence of air absorption at high frequencies, and of the frequency dependence of the absorption and scattering coefficients of the materials.
Finally, a subjective verification of the complete system is carried out, since technical limitations prevented evaluating the setup over the equivalent of the full audible range, and the ultimate goal of the methods studied is to ensure good perception by a listener in the room. The Conclusions section summarises the findings and assesses the overall performance and usefulness of the model, which, despite some precision and repeatability problems inherent to the means used, is valid for illustrating the physical phenomena to be taught. The Future Work section proposes lines of work for future projects at Syddansk Universitet that could confirm the work carried out here, improve the precision and reliability of the setup, or enrich the pedagogic possibilities of the related laboratory exercises. After the references, annexes contain tables and plots of the measurements taken in different parts of the work; related information and material is provided on the accompanying CD.
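The basic constraint behind acoustic scale models can be sketched in a few lines: in a 1:N model the geometry shrinks by N, so every full-scale frequency band must be measured at N times that frequency to keep the wavelength-to-room-size ratio. The 1:10 scale factor and the octave bands below are illustrative assumptions, not the specific setup used at Syddansk Universitet.

```python
# Minimal sketch of the frequency scaling that underlies acoustic scale models:
# in a 1:N model, geometry shrinks by N, so each full-scale frequency must be
# measured at N times that frequency. Scale factor and bands are assumed.

SCALE = 10  # a 1:10 model, a common choice in teaching labs (assumed)

full_scale_bands_hz = [125, 250, 500, 1000, 2000, 4000]
model_bands_hz = [f * SCALE for f in full_scale_bands_hz]

for f_full, f_model in zip(full_scale_bands_hz, model_bands_hz):
    print(f"{f_full:>5} Hz (full scale) -> {f_model:>6} Hz (model)")

# The 4 kHz full-scale band becomes 40 kHz in the model, well into the
# ultrasonic range where air absorption is strong -- which is why the text
# singles out high-frequency air absorption as a limitation of the method.
```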

Relevance: 30.00%

Abstract:

The peak temperature in the corona of plasma ejected by a laser-irradiated slab is discussed in terms of a one-electron-temperature model. Both heat-flux saturation and pulse rise-time effects are considered; the intensity in the rising half of the pulse is approximated by a linear function of time, I(t) = I₀t/τ. The temperature is found to be proportional to (I₀λ²)^(2/3) and a function of I₀λ⁴/τ. Above a certain value of I₀λ⁴/τ, the plasma presents two characteristic temperatures (at saturation and at the critical surface), which can be identified with the experimentally observed cold- and hot-electron temperatures. The results are compared with extensive experimental data available for both Nd and CO₂ lasers, with I₀ (W·cm⁻²) λ² (µm) starting around 10¹². The agreement is good if substantial flux inhibition is assumed (flux-limit factor f = 0.03), and fails for I₀λ² above 10¹⁵. Results for both ablation pressure and mass ablation rate are also given.
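The quoted scaling T ∝ (I₀λ²)^(2/3) implies, at equal intensity, a fixed temperature ratio between the two laser types mentioned: (10.6/1.06)^(4/3) ≈ 21.5 for CO₂ (10.6 µm) relative to Nd (1.06 µm). Since the abstract gives no proportionality constant, only ratios are meaningful in the sketch below; the intensity value is an arbitrary illustration.

```python
# Sketch of the scaling T ~ (I0 * lambda^2)^(2/3) from the abstract.
# The proportionality constant is unknown here, so only ratios are meaningful.

def relative_temperature(intensity_w_cm2, wavelength_um):
    """Corona temperature up to an unknown constant factor."""
    return (intensity_w_cm2 * wavelength_um ** 2) ** (2.0 / 3.0)

t_nd = relative_temperature(1e14, 1.06)   # Nd laser, arbitrary intensity
t_co2 = relative_temperature(1e14, 10.6)  # CO2 laser, same intensity

print(f"CO2 / Nd temperature ratio at equal intensity: {t_co2 / t_nd:.1f}")
```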

Relevance: 30.00%

Abstract:

In this Thesis we have addressed the difficulties related to the development of context-aware human-machine interaction systems. This issue is part of two research fields: interactive systems and contextual information sources. Traditionally both fields have been integrated through domain-specific vertical solutions that allow interactive systems to access contextual information without having to deal with low-level procedures, but restrict their interoperability with other applications and heterogeneous data sources. It is therefore essential to boost research on interoperable solutions that provide access to real-world information through homogeneous procedures. This issue fits the scenarios of "Ubiquitous Computing" and the "Internet of Things", which point toward a future in which many objects around us will be able to acquire meaningful information about the environment and communicate it to other objects and to people. Since interactive systems are able to get information from their environment through interaction with the user, they can play an important role in this scenario, as they can both consume real-world data and produce enriched information.
This Thesis deals with the integration of both fields in this technological scenario. To do so, we first carried out an analysis of the most important initiatives for the definition and design of interactive systems, and of the main infrastructures for providing information. Through this study, the W3C SCXML language is proposed for both the design of interactive systems and the processing of data provided by different context sources. This work shows how the SCXML capabilities for combining information from different modalities can also be used to process and integrate contextual information from heterogeneous sensor sources, and therefore to develop context-aware interaction systems. Similarly, we present the Sensor Web initiative, and its semantic extension, the Semantic Sensor Web, as an appropriate approach for providing uniform access and delivery of information to context-aware interactive systems. Subsequently, we analyzed the challenges of integrating both types of initiatives, SCXML and the (Semantic) Sensor Web. As a result, we state a number of functionalities that must be implemented to perform this integration. Using technologies that bring flexibility to the implementation process and are based on current recommendations and standards, we implemented a series of experimental developments integrating the identified functionalities.
Finally, in order to validate our approach, we conducted experiments in a testing environment simulating a driving scenario. In this framework, an interactive system accesses a semantic extension of a telco platform, based on the Sensor Web standards, to acquire contextual information and publish the observations that the user made to the system. The results showed the feasibility of using the SCXML language to design context-aware interactive systems that access advanced sensor platforms to consume and publish information while interacting with the user. In the same way, it was shown how the use of semantic technologies in the processes of querying and publishing sensor data can assist in reusing and sharing the information published in Sensor Web infrastructures by any type of application, and thus contribute to realizing the future Internet of Things scenario.
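The core SCXML idea the thesis builds on, a state machine whose transitions are driven by events, can be sketched in a few lines of Python. This is not the thesis implementation nor an SCXML runtime; the states and event names (e.g. `sensor.speed_high`) are invented to illustrate how user events and contextual sensor events can flow through the same transition table.

```python
# Minimal sketch (not the thesis implementation) of the state-chart idea
# behind SCXML: a dialogue state machine that consumes both user events and
# sensor events through one transition table. States/events are invented.

TRANSITIONS = {
    ("idle", "user.speak"): "listening",
    ("listening", "sensor.speed_high"): "warn_driver",  # contextual input
    ("listening", "user.done"): "idle",
    ("warn_driver", "user.ack"): "idle",
}

def step(state, event):
    """Return the next state, staying put on events with no transition."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["user.speak", "sensor.speed_high", "user.ack"]:
    state = step(state, event)
    print(event, "->", state)
```

Real SCXML adds hierarchy, parallel states, and a datamodel on top of this table-driven core, which is what makes it suitable for fusing multimodal and contextual inputs.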

Relevance: 30.00%

Abstract:

In an industrial scenario where rapid microbiological methods are becoming essential tools for quality control in biotechnological areas such as food, pharmaceutical and biochemical production, the objective of this doctoral thesis is to develop a rapid microorganism inspection technique based on ultrasound. It is proposed that the combination of an ultrasonic measuring device with a specially designed liquid medium, able to produce and trap bubbles, can constitute the basis of a sensitive and rapid detection method for microbial contaminations. The proposed technique is effective for catalase-positive microorganisms, and the well-known catalase-induced hydrolysis of hydrogen peroxide is the foundation of the method. The physical consequence of this hydrolysis is an increasingly bubbly liquid medium, which has been studied and modelled from the point of view of ultrasonic propagation. Properties deduced from the enzyme kinetics analysis have been used to evaluate the method as a microbial inspection technique. In this thesis, theoretical and experimental aspects of hydrogen peroxide hydrolysis were analyzed in order to quantitatively describe and understand the detection of catalase-positive microorganisms by means of ultrasonic measurements.
More concretely, the experiments show how the oxygen produced in the form of bubbles is trapped in a new agar-based gel medium specially designed for this application. Ultrasonic attenuation and backscattering were measured in this medium with a pulse-echo technique throughout the hydrogen peroxide hydrolysis process. Catalase enzymatic activity was detected down to 0.001 units/ml. Moreover, this study shows that the proposed method can achieve microbial detection down to 10^5 cells/ml in a short time, of the order of a few minutes. These results represent an improvement of three orders of magnitude over other ultrasonic detection methods for microorganisms. In addition, the sensitivity reached is competitive with modern rapid microbiological methods such as ATP detection by bioluminescence. Above all, this work points out a way to proceed in developing new rapid microbial detection techniques based on ultrasound.
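Why bubbles accumulate over time can be sketched with the stoichiometry of the catalase reaction (2 H₂O₂ → 2 H₂O + O₂) and a simple first-order decay of the substrate. The rate constant and initial concentration below are illustrative assumptions, not values measured in the thesis, which models the medium in much more detail.

```python
# Back-of-the-envelope sketch of oxygen release by catalase-driven
# H2O2 decomposition (2 H2O2 -> 2 H2O + O2), modelled as first-order decay.
# Rate constant and initial concentration are illustrative assumptions.
import math

def h2o2_remaining(c0_mol_l, k_per_min, t_min):
    """Substrate left after t minutes of first-order decomposition."""
    return c0_mol_l * math.exp(-k_per_min * t_min)

def o2_released_mol_l(c0_mol_l, k_per_min, t_min):
    # One mole of O2 per two moles of H2O2 decomposed.
    return 0.5 * (c0_mol_l - h2o2_remaining(c0_mol_l, k_per_min, t_min))

for t in [0, 1, 5, 10]:
    print(f"t = {t:>2} min: O2 released ~ {o2_released_mol_l(0.1, 0.3, t):.4f} mol/L")
```

The monotonic growth of dissolved and gaseous O₂ is what makes the acoustic attenuation and backscattering increase during the measurement.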

Relevance: 30.00%

Abstract:

On 22 February 1996, the STS-75 space mission started from the NASA facilities at Cape Canaveral. The mission consisted of the launch of the shuttle Columbia to carry out two experiments in space: TSS-1R (Tethered Satellite System 1 Reflight) and USMP (United States Microgravity Payload). TSS-1R is a reflight of the similar 1992 mission TSS-1. The TSS space programme is a bilateral scientific cooperation between the US space agency NASA and the Italian Space Agency (ASI). The TSS-1R system consists of the shuttle Columbia, which deploys upward, by means of a 20 km long conducting tether, a spherical satellite (1.5 m diameter) containing scientific instrumentation. The system, orbiting at about 300 km above the Earth's surface, currently represents the largest experimental space structure. Owing to its dimensions, its flexibility and the conducting properties of the tether, the system interacts in a quite complex manner with the Earth's magnetic field and the ionospheric plasma, such that the total system behaves both as an electromagnetic radiating antenna and as an electric power generator. Twelve scientific experiments have been prepared by US and Italian scientists to study the electrodynamic behaviour of the structure orbiting in the ionosphere. Two experiments attempt to receive, on the Earth's surface, possible electromagnetic events radiated by TSS-1R: the EMET project (Electro Magnetic Emissions from Tether, USA) and the OESEE project (Observations on the Earth Surface of Electromagnetic Emissions, Italy) form a coordinated programme of passive detection of such possible EM emissions.
This detection will allow the verification of theoretical hypotheses on the electrodynamic interactions between the orbiting system, the Earth's magnetic field and the ionospheric plasma, with two principal aims: the technological assessment of the system concept, and a deeper knowledge of ionospheric properties for future space applications. A theoretical model that captures the peculiarities of tether emissions is being developed for signal prediction at constant tether current. As a step prior to the calculation of the expected ground signal, the Alfven-wave signature left by the tether far back in the ionosphere has been determined. The scientific expectations from the combined effort to measure these perturbations are outlined, taking into account the ground track sensor systems used.
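The "electric power generator" behaviour mentioned above comes from the motional EMF induced across a conductor moving through the geomagnetic field, roughly V ≈ v·B·L for a tether perpendicular to both. The tether length is the 20 km quoted in the abstract; the orbital speed and field magnitude below are typical low-Earth-orbit values assumed for illustration.

```python
# Rough check of the "electric power generator" claim: motional EMF of a
# conducting tether, V ~ v * B * L. Orbital speed and field strength are
# typical LEO values assumed for illustration; only the 20 km length is
# taken from the abstract.

def motional_emf(v_m_s, b_tesla, length_m):
    """EMF induced across a straight tether moving across the geomagnetic field."""
    return v_m_s * b_tesla * length_m

v = 7.7e3      # orbital speed at ~300 km altitude, m/s (assumed)
b = 3.0e-5     # geomagnetic field magnitude, tesla (assumed)
length = 20e3  # the 20 km tether from the abstract

print(f"EMF ~ {motional_emf(v, b, length) / 1e3:.1f} kV")
```

Kilovolt-scale EMFs of this order are what drive the tether currents that couple the system to the ionospheric plasma.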

Relevance: 30.00%

Abstract:

The era of seed-cast-grown monocrystalline-based silicon ingots is coming. Mono-like, pseudomono or quasimono wafers are product labels that can nowadays be found in the market, as a critical innovation for the photovoltaic industry. They integrate some of the most favorable features of conventional silicon substrates for solar cells: the high cell efficiency offered by monocrystalline Czochralski-Si (Cz-Si) wafers, and the lower cost, high productivity and full-square shape that characterize the well-known multicrystalline casting growth method. Nevertheless, this innovative crystal growth approach still faces a number of mass-scale problems that need to be resolved in order to gain a deep, fully reliable and worldwide market: (i) extended defect formation during the growth process; (ii) optimization of seed recycling; and (iii) parts of the ingots yielding low solar cell performance, which directly affect the production costs and yield of the approach. This paper therefore presents a series of casting crystal growth experiments and characterization studies of ingots, wafers and cells manufactured in an industrial setting, showing the main sources of crystal defect formation, impurity enrichment and their potential consequences at the solar cell level. The technological drawbacks mentioned above are directly addressed, and industrial actions are proposed to pave the way of this new wafer technology towards high-efficiency solar cells.

Relevance: 30.00%

Abstract:

During seed germination, the endosperm cell walls (CWs) undergo an important weakening process mainly driven by hydrolytic enzymes, such as endo-β-mannanases (MAN; EC 3.2.1.78), which catalyze the cleavage of β-1,4 bonds in mannan polymers. In Arabidopsis thaliana seeds, endo-β-mannanase activity increases during seed imbibition and decreases after radicle emergence. AtMAN7 is the most highly expressed MAN gene in seeds upon germination, and its transcripts are restricted to the micropylar endosperm and to the radicle tip just before radicle emergence. Mutants with a T-DNA insertion in this gene (man7 knockouts) have a slower germination rate than the wild type (t50 = 34 h versus t50 = 25 h). To gain insight into the transcriptional regulation of the AtMAN7 gene, a bioinformatic search for conserved non-coding cis-elements (phylogenetic shadowing) within the Brassicaceae orthologous MAN7 gene promoters has been carried out, and these conserved motifs have been used as baits to look for their interacting transcription factors (TFs), using as prey an arrayed yeast library of circa 1,200 TFs from A. thaliana. The basic leucine zipper AtbZIP44, but not its closely related AtbZIP11, has thus been identified, and its regulatory function upon AtMAN7 during seed germination has been validated by different molecular and physiological techniques, such as RT-qPCR analyses, mRNA fluorescence in situ hybridization (FISH) experiments, and the establishment of the germination kinetics of both over-expression (oex) lines and T-DNA insertion mutants in AtbZIP44. The transcriptional combinatorial network through which AtbZIP44 regulates AtMAN7 gene expression during seed germination has been further explored through protein-protein interactions between AtbZIP44 and other bZIP members. In this way, AtbZIP9 has been identified by yeast two-hybrid experiments, and its physiological implication in the control of AtMAN7 expression has been similarly established.
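A germination half-time like the quoted t50 = 25 h (wild type) versus 34 h (mutant) is typically read off a germination time course. The sketch below shows one common way to do that, by linear interpolation between scored time points; the data values are invented for illustration, not taken from the study.

```python
# Sketch of how a germination half-time (t50) can be read off a germination
# time course by linear interpolation. The data points are invented.

def t50(times_h, germinated_frac):
    """Interpolate the time at which half of the seeds have germinated."""
    points = list(zip(times_h, germinated_frac))
    for (t0, f0), (t1, f1) in zip(points, points[1:]):
        if f0 <= 0.5 <= f1:
            return t0 + (0.5 - f0) * (t1 - t0) / (f1 - f0)
    raise ValueError("germination never crosses 50%")

times = [0, 12, 24, 36, 48]          # hours after imbibition (invented)
wild_type = [0.0, 0.05, 0.45, 0.85, 0.95]  # germinated fraction (invented)

print(f"t50 ~ {t50(times, wild_type):.1f} h")
```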

Relevância:

30.00%

Publicador:

Resumo:

Image segmentation is an important field in computer vision and one of its most active research areas, with applications in image understanding, object detection, face recognition, video surveillance and medical image processing. Image segmentation is a challenging problem in general, but especially so in the biological and medical imaging fields, where the acquisition techniques usually produce cluttered and noisy images and near-perfect accuracy is required in many cases.

In this thesis we first review and compare some standard techniques widely used for medical image segmentation. These techniques use pixel-wise classifiers and introduce weak pairwise regularization, which is insufficient in many cases. We study their difficulties in capturing high-level structural information about the objects to segment. This deficiency leads to erroneous detections, ragged boundaries, incorrect topological configurations and wrong shapes. To deal with these problems, we propose a new high-level regularization method that learns shape and topological information from training data in a nonparametric way using higher-order potentials. Higher-order potentials are becoming increasingly popular in computer vision, but the exact representation of a general higher-order potential defined over many variables is computationally infeasible. We use a compact representation of the potentials based on a finite set of patterns learned from training data that, in turn, depends on the observations. Thanks to this representation, higher-order potentials can be converted into pairwise potentials with some added auxiliary variables, and minimized with tree-reweighted message passing (TRW) and belief propagation (BP) techniques. Both synthetic and real experiments confirm that our model fixes the errors of weaker approaches.

Even with high-level regularization, perfect accuracy is unattainable, and manual editing of the automatic segmentation results is still necessary. Manual editing is tedious and cumbersome, and tools that assist the user are greatly appreciated. These tools need to be precise, but also fast enough to be used interactively. Active contours are a good solution: they are good at precise boundary detection and, instead of finding a global solution, they provide a fine-tuning of previously existing results. However, they require an implicit representation to deal with topological changes of the contour, and this leads to partial differential equations (PDEs) that are computationally costly to solve and may present numerical stability issues. We present a morphological approach to contour evolution based on a new curvature morphological operator valid for surfaces of any dimension. We approximate the numerical solution of the contour-evolution PDE by the successive application of a set of morphological operators defined on a binary level set. These operators are very fast, do not suffer from numerical stability issues, and do not degrade the level-set function, so there is no need to reinitialize it. Moreover, their implementation is much easier than their PDE counterparts, since they do not require sophisticated numerical algorithms. From a theoretical point of view, we delve into the connections between differential and morphological operators, and introduce novel results in this area. We validate the approach by providing a morphological implementation of geodesic active contours, active contours without edges, and turbopixels. In the experiments conducted, the morphological implementations converge to solutions equivalent to those achieved by traditional numerical solutions, but with significant gains in simplicity, speed, and stability.
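The morphological evolution described above can be sketched in a few lines. The following is a minimal, illustrative Python sketch, not the thesis's actual operators: it evolves a binary level set with a balloon force (binary dilation or erosion) gated by an edge-stopping function, and uses a 3×3 median filter as a simplified stand-in for the curvature operator (median filtering of a binary set is a classical approximation of mean-curvature motion; the thesis's sup-inf/inf-sup operators differ). All names and the toy data are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def morphological_step(u, stop, balloon=1):
    """One iteration of a simplified morphological contour evolution.

    u       : binary level set (0/1 array)
    stop    : edge-stopping function in [0, 1]; ~0 near strong edges
    balloon : +1 to inflate (dilation), -1 to deflate (erosion)
    """
    # Balloon force: dilate/erode, but only move where the stopping
    # function allows it (stop > 0.5); elsewhere keep the previous set.
    moved = ndimage.binary_dilation(u) if balloon > 0 else ndimage.binary_erosion(u)
    u = np.where(stop > 0.5, moved, u).astype(u.dtype)

    # Curvature regularization: a median filter on a binary set is a cheap,
    # stable approximation of mean-curvature motion (stand-in operator).
    u = (ndimage.median_filter(u.astype(float), size=3) > 0.5).astype(u.dtype)
    return u

# Toy example: grow a small seed region until it fills a bright disk.
yy, xx = np.mgrid[0:64, 0:64]
disk = ((yy - 32) ** 2 + (xx - 32) ** 2) < 20 ** 2          # object to segment
stop = np.where(disk, 1.0, 0.0)                             # free inside, blocked outside
u = (((yy - 32) ** 2 + (xx - 32) ** 2) < 3 ** 2).astype(np.uint8)  # seed

for _ in range(40):
    u = morphological_step(u, stop, balloon=+1)
# u now covers (most of) the disk without leaking outside it.
```

No PDE is solved and the level set stays binary, which is why no reinitialization is needed; each iteration is just two cheap rank filters.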

Relevância:

30.00%

Publicador:

Resumo:

Water shortage in arid and semi-arid regions stems from low rainfall and its uneven distribution throughout the season, which makes rainfed agriculture a precarious enterprise. One approach to enhance and stabilize the water available for crop production in these regions is to use in-situ rainwater harvesting and conservation technologies. Adoption of in-situ soil moisture conservation systems, such as conservation tillage, is one of the strategies for upgrading agricultural management in arid and semi-arid environments. The general aim of this thesis is to develop a methodology for applying reservoir tillage and to investigate the short-term effects of different tillage practices, including reservoir tillage (RT), minimum tillage (MT), zero tillage (ZT), and conventional tillage (CT), on soil physical properties, soil water retention, and soil erosion control in arid and semi-arid areas.

As a first approach, a thorough review of the state of the art was carried out. We found that reservoir tillage is an effective system for harvesting rainwater and conserving soil, but that it has not been scientifically evaluated to the same extent as other tillage systems. The experimental work covered three different conditions: laboratory experiments, field experiments in an arid region, and field experiments in a semi-arid region. To investigate and quantify water storage from RT and how it could be adapted to improve infiltration of harvested rainwater and reduce soil erosion, a laboratory-scale rainfall simulator was developed. The rainfall characteristics, including intensity, spatial uniformity and raindrop size, confirmed that natural rainfall conditions were simulated with sufficient accuracy. The simulator was automatically controlled by a solenoid valve, and three pressure nozzles were used to spray water corresponding to different rainfall intensities. To assess the RT method under different surface slopes, soil-scooping devices of identical volume were used to create depressions, and the performance of the soil with these depressions was compared to a control surface with no depressions. Results showed that RT was able to reduce soil erosion and surface runoff and to significantly increase infiltration.

Based on these results, and after selecting the proper shape of the depressions, a combined implement, the integrated reservoir tillage system (integrated RT), comprising a single-row chisel plow, a single-row spike-tooth harrow, a modified seeder, and a spiked roller, was developed and compared to MT and CT in an arid environment in Egypt. The field experiments evaluated the impact of the different tillage practices and their operating parameters, at different tillage depths and forward speeds, on soil physical properties, as well as on runoff, soil losses, moisture regime, water-harvesting efficiency, and winter wheat productivity. Results indicated that the integrated RT drastically increased infiltration, producing a rate 47.51% higher than MT and 64.56% higher than CT. The lowest values of runoff and soil losses, 4.91 mm and 0.65 t ha-1 respectively, were recorded under the integrated RT, while the highest values, 11.36 mm and 1.66 t ha-1 respectively, occurred under CT.

In addition, two field experiments were carried out in a semi-arid environment in Madrid, with barley and maize as the main crops; the potential of wireless sensor technology to monitor soil water potential was also studied. In the rainfed barley experiment, two tillage practices (RT and MT) were compared. Soil water potential increased quite steadily and was consistently greater under MT; over the entire observation period, RT decreased soil water potential by 43.6, 5.7, and 82.3% compared to MT at soil depths of 10, 20, and 30 cm, respectively. Clear differences in crop yield and yield components were also observed between the two tillage systems: grain yield (up to 14%) and biomass yield (up to 8.8%) were increased by RT. In the irrigated maize experiment, four tillage practices (RT, MT, ZT, and CT) were compared. ZT and RT had the lowest soil water potential and soil temperature: compared to CT, ZT and RT decreased soil water potential by 72 and 23% respectively at a soil depth of 40 cm, and decreased soil temperature by 1.1 and 0.8 °C respectively at a soil depth of 5 cm. ZT also had the highest soil bulk density and penetration resistance, which delayed maize growth and decreased grain yield, which was 15.4% lower than under CT. RT increased maize grain yield by about 12.8% compared to ZT, while no significant differences in maize yield were found among RT, MT, and CT.

In summary, these experiments show that by using reservoir tillage to make depressions after seeding, the internal surfaces of the depressions are consolidated in such a way that water is held and percolates into the soil, supplying moisture to the plant rooting zone over an extended period of time. Reservoir tillage could be used as an alternative method in arid and semi-arid regions, since it retains moisture in situ through structures that reduce runoff, and can thus result in improved crop yields.
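The runoff and soil-loss figures reported for the Egyptian field experiment translate into large relative reductions for integrated RT versus CT. A short script makes the arithmetic explicit; the numbers are taken from the abstract, while the function name is ours:

```python
# Figures reported in the abstract: runoff (mm) and soil loss (t/ha)
# under integrated reservoir tillage (RT) and conventional tillage (CT).
runoff = {"RT": 4.91, "CT": 11.36}
soil_loss = {"RT": 0.65, "CT": 1.66}

def relative_reduction(treated, control):
    """Percent reduction of `treated` relative to `control`."""
    return 100.0 * (control - treated) / control

runoff_cut = relative_reduction(runoff["RT"], runoff["CT"])           # ~56.8 %
soil_loss_cut = relative_reduction(soil_loss["RT"], soil_loss["CT"])  # ~60.8 %
```

So the depressions roughly halved runoff and cut soil loss by about three fifths relative to conventional tillage in that trial.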

Relevância:

30.00%

Publicador:

Resumo:

The heating produced by the absorption of radiofrequency (RF) energy has been considered a secondary, undesirable effect of MRI procedures. In this work, we measured the power absorbed by distilled water, glycerol and egg albumin during NMR and non-NMR experiments. The samples are dielectrics representative of different biological materials. All samples were irradiated with the same RF pulse sequence, while the magnetic field strength was the variable changed between experiments. The measurements show a smooth increase of the thermal power as the magnetic field grows, due to the magnetoresistive effect in the copper antenna, a coil around the probe that directly heats the sample. However, when the magnetic field was appropriate for NMR to take place, anomalies in the expected thermal power were observed: the thermal power was higher in the cases of water and glycerol, and lower in the case of albumin. An ANOVA test demonstrated that the observed differences between the measured power and the expected power are significant.
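A one-way ANOVA of the kind mentioned can be sketched with scipy.stats.f_oneway. The numbers below are synthetic stand-ins (the paper's actual measurements are not reproduced), chosen so that the on-resonance group mean sits above the baseline extrapolated from the smooth magnetoresistive trend:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic illustration: repeated thermal-power readings (arbitrary units)
# at the NMR-resonant field, versus the power expected by extrapolating the
# smooth magnetoresistive trend fitted at off-resonance fields.
measured = rng.normal(loc=5.8, scale=0.15, size=10)   # on resonance
expected = rng.normal(loc=5.2, scale=0.15, size=10)   # extrapolated baseline

f_stat, p_value = stats.f_oneway(measured, expected)
significant = p_value < 0.05  # reject "no extra heating" at the 5% level
```

With two groups, this F-test is equivalent to a two-sample t-test; the same call accepts more groups (e.g. one per sample material) in a single ANOVA.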

Relevância:

30.00%

Publicador:

Resumo:

Cognitive neuroscience boils down to describing the ways in which cognitive function results from brain activity. In turn, brain activity shows complex fluctuations, with structure at many spatio-temporal scales. Exactly how cognitive function inherits the physical dimensions of neural activity, though, is highly non-trivial, and so, in general, are the corresponding dimensions of cognitive phenomena. As for any physical phenomenon, the first conceptual step in studying cognitive function should be to establish its dimensions. Here, we provide a systematic presentation of the temporal aspects of task-related brain activity, from the smallest scale of the brain-imaging technique's resolution, through the characteristic time scales of the process under study, to the observation time of a given experiment. We first review some standard assumptions about the temporal scales of cognitive function. Despite their general use, these assumptions hold to a high degree of approximation for many cognitive processes (viz. fast perceptual ones), but have limitations for others (e.g., thinking or reasoning). We define in a rigorous way the temporal quantifiers of cognition at all scales, and illustrate how they qualitatively vary as a function of the properties of the cognitive process under study. We propose that each phenomenon should be approached with its own set of theoretical, methodological and analytical tools. In particular, we show that when treating cognitive processes such as thinking or reasoning, complex properties of ongoing brain activity, which can be drastically simplified when considering fast (e.g., perceptual) processes, start playing a major role; they not only characterize the temporal properties of task-related brain activity, but also determine the conditions for proper observation of the phenomena. Finally, some implications for the design of experiments, data analyses, and the choice of recording parameters are discussed.