931 results for Rule-based techniques


Relevance:

80.00%

Publisher:

Abstract:

Neuroscientists have a variety of perspectives with which to classify different parts of the brain. With the rise of genetic-based techniques such as optogenetics, it is increasingly important to identify whether a group of cells, defined by morphology, function or anatomical location, possesses a distinct pattern of expression of one or more genetic promoters. This would allow for better ways to study these genetically defined subpopulations of neurons. In this work, I present a theoretical discussion and three experimental studies in which this was the main question being addressed. Paper I discusses the issues involved in selecting a promoter to study structures and subpopulations in the Ventral Tegmental Area. Paper II characterizes a subpopulation of cells in the Ventral Tegmental Area that shares the expression of a promoter, is anatomically very restricted, and induces aversion when stimulated. Paper III utilizes a similar strategy to investigate a subpopulation in the subthalamic nucleus that expresses PITX2 and VGLUT2 and which, when inactivated, causes hyperlocomotion. Paper IV exploits the fact that a previously identified group of cells in the ventral hippocampus expresses CHRNA2, and indicates that this population may be necessary and sufficient for the establishment of the theta rhythm (2-8 Hz) in the Local Field Potential of anesthetized mice. All of these studies were guided by the same strategy of characterizing and studying the role of a genetically defined subpopulation of cells, and they demonstrate the different ways in which this approach can generate new discoveries.

Relevance:

80.00%

Publisher:

Abstract:

This work presents discussions on the teaching of chemical bonds in high school and some implications of this approach for students' learning of chemistry. In general, understanding how chemical elements combine to form substances and compounds is a key point for understanding the properties of substances and their structure. In this sense, chemical bonds represent an extremely important topic, and knowledge of them is essential for a better understanding of the changes occurring in our world. Despite this, it is observed that the way in which this concept is discussed in chemistry class has contributed, paradoxically, to the emergence of several alternative conceptions, hindering students' understanding of the subject. It is believed that one of the explanations for these observations is the exclusive use of the "octet rule" as an explanatory model for chemical bonds. Over time, the use of such a model eventually replaces the chemical principles that gave rise to it, transforming knowledge into a series of rituals that are uninteresting and even confusing for students. Based on these findings, a reformulation of the way this content is approached in the classroom is deemed necessary, taking into account especially the fact that explanations of the formation of substances should be based on the concept of energy, which is fundamental to understanding how atoms combine. Thus, the main research question described here is the following: can an explanatory model for chemical bonds in high school be developed based on the concept of energy, without the need to use the "octet rule"? Based on the concepts and methodologies of modeling activity, a teaching model was developed through Teaching Units designed to support high school teachers in addressing chemical bonds through the concept of energy. Through this work, it is intended that the process of teaching and learning the content of chemical bonds becomes more meaningful to students, by developing models that contribute to the learning of this and hence other basic fundamentals of chemistry.

Relevance:

80.00%

Publisher:

Abstract:

Harmful algal blooms (HABs) are becoming more frequent as the climate changes, with tropical species moving northward. Monitoring programs that detect the presence of toxic algae before they bloom are of paramount importance to protect aquatic ecosystems, aquaculture, human health and local economies. Rapid and reliable species identification methods using molecular barcodes coupled to biosensor detection tools have received increasing attention over the past decade as an alternative to the impractical standard techniques based on microscopic counting. This work reports on a PCR-amplification-free electrochemical genosensor for the enhanced selective and sensitive detection of RNA from multiple Mediterranean toxic algal species. For the sandwich hybridization assay (SHA), we designed longer capture and signal probes for more specific target discrimination against a single base-pair mismatch from closely related species and for reproducible signals. We optimized the experimental conditions, viz., the minimal probe concentration in the SHA on a screen-printed gold electrode, and selected the best electrochemical mediator. Probes from 13 Mediterranean dinoflagellate species were tested under optimized conditions, and the format was further tested for the quantification of RNA from environmental samples. We not only enhanced the selectivity and sensitivity of state-of-the-art toxic algal genosensors but also increased the repertoire of toxic algal biosensors in the Mediterranean, towards an integral and automatic monitoring system.

Relevance:

80.00%

Publisher:

Abstract:

The realization that hard coastal infrastructures support lower biodiversity than natural habitats has prompted a wealth of research seeking to identify design enhancements that offer ecological benefits. Some studies have shown that artificial structures can be modified to increase levels of diversity. Most studies, however, only considered the short-term ecological effects of such modifications, even though reliance on results from short-term studies may lead to serious misjudgements in conservation. In this study, a seven-year experiment examined how the addition of small pits to otherwise featureless seawalls may enhance the stocks of a highly exploited limpet. Modified areas of the seawall supported enhanced stocks of limpets seven years after the addition of pits. Modified areas of the seawall also supported a community that differed in the abundance of littorinids, barnacles and macroalgae compared to the controls. Responses to different treatments (number and size of pits) were species-specific and, while some species responded directly to differences among treatments, others might have responded indirectly via changes in the distribution of competing species. This type of habitat enhancement can have positive, long-lasting effects on the ecology of urban seascapes. An understanding of species interactions could be used to develop a rule-based approach to enhancing biodiversity.

Relevance:

80.00%

Publisher:

Abstract:

Smartphones have undergone a remarkable evolution over the last few years, from simple calling devices to full-fledged computing devices on which multiple services and applications run concurrently. Unfortunately, battery capacity increases at a much slower pace, making it a main bottleneck for Internet-connected smartphones. Several software-based techniques have been proposed in the literature for improving battery life. The most common techniques include data compression, packet aggregation or batch scheduling, offloading partial computations to the cloud, and periodically switching off interfaces (e.g., WiFi or 3G/4G) for short intervals. However, there has been no focus on eliminating the energy waste of background applications that extensively utilize smartphone resources such as the CPU, memory, GPS, WiFi and the 3G/4G data connection. In this paper, we propose an Application State Proxy (ASP) that suppresses/stops applications on smartphones and maintains their presence on any other network device. The applications are resumed/restarted on smartphones only in case of an event, such as a new message arrival. We present the key requirements for the ASP service and different possible architectural designs. In short, the ASP concept can significantly improve the battery life of smartphones by reducing, to the maximum extent, the resource usage of background applications.
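
A minimal sketch of how the ASP idea could work, in Python and under stated assumptions: the abstract only outlines requirements and candidate architectures, so every name here (AppStateProxy, suspend, deliver, resume) is hypothetical, not the paper's API.

```python
# Hedged sketch of the Application State Proxy (ASP) concept: the proxy runs
# on another network device, answers presence checks for suspended apps, and
# signals the phone to resume an app only when a real event arrives.
# All names and the event model are hypothetical, not the paper's design.

from dataclasses import dataclass, field


@dataclass
class AppSession:
    app_id: str
    pending_events: list = field(default_factory=list)


class AppStateProxy:
    def __init__(self):
        self.sessions = {}  # app_id -> AppSession for suspended apps

    def suspend(self, app_id):
        # The phone hands presence over to the proxy, then stops the local
        # process so it no longer consumes CPU, memory or radio time.
        self.sessions[app_id] = AppSession(app_id)

    def keep_alive(self, app_id):
        # The proxy, not the phone, answers periodic presence checks.
        return app_id in self.sessions

    def deliver(self, app_id, event):
        # A real event (e.g. a new message) arrives for a suspended app:
        # queue it and tell the caller to push a wake-up to the phone.
        session = self.sessions.get(app_id)
        if session is None:
            return False  # app not suspended; deliver directly to the phone
        session.pending_events.append(event)
        return True

    def resume(self, app_id):
        # The phone restarts the app and drains the queued events.
        session = self.sessions.pop(app_id, None)
        return session.pending_events if session else []


proxy = AppStateProxy()
proxy.suspend("chat-app")
assert proxy.keep_alive("chat-app")       # presence maintained off-phone
proxy.deliver("chat-app", "new message")  # event triggers a phone wake-up
print(proxy.resume("chat-app"))           # ['new message']
```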

Relevance:

80.00%

Publisher:

Abstract:

Polymer solar cells are promising in that they are inexpensive to produce and, due to their mechanical flexibility, have the potential for use in applications not possible for more traditional types of solar cells. The performance of polymer solar cells depends strongly on the distribution of electron donor and acceptor material in the active layer. Understanding the connection between morphology and performance, as well as how to control the morphology, is therefore of great importance. Furthermore, improving the lifetime of polymer solar cells has become at least as important as improving the efficiency. In this thesis, the relation between morphology and solar cell performance is studied, as well as the material stability, for blend films of the thiophene-quinoxaline copolymer TQ1 and the fullerene derivatives PCBM and PC70BM. Atomic force microscopy (AFM) and scanning transmission X-ray microscopy (STXM) are used to investigate the lateral morphology, secondary ion mass spectrometry (SIMS) to measure the vertical morphology, and near-edge X-ray absorption fine structure (NEXAFS) spectroscopy to determine the surface composition. Lateral phase-separated domains are observed whose size is correlated to the solar cell performance, while the observed TQ1 surface enrichment does not affect the performance. Changes to the unoccupied molecular orbitals as a result of illumination in ambient air are observed by NEXAFS spectroscopy for PCBM, but not for TQ1. The NEXAFS spectrum of PCBM in a blend with TQ1 changes more than that of pristine PCBM. Solar cells in which the active layer has been illuminated in air prior to the deposition of the top electrode exhibit greatly reduced electrical performance. The valence band and absorption spectrum of TQ1 are affected by illumination in air, but the effects are not large enough to account for the losses in solar cell performance, which are mainly attributed to PCBM degradation at the active layer surface.

Relevance:

80.00%

Publisher:

Abstract:

Although the value of primary forests for biodiversity conservation is well known, the potential biodiversity and conservation value of regenerating forests remains controversial. Many factors likely contribute to this, including: 1. the variable ages of the regenerating forests being studied (often dominated by relatively young regenerating forests); 2. the potential for confounding ongoing human disturbance (such as logging and hunting); 3. the relatively low number of multi-taxa studies; 4. the lack of studies that directly compare different historic disturbances within the same location; 5. contrasting patterns from different survey methodologies and the paucity of knowledge on the impacts across different vertical levels of rainforest biodiversity (often due to a lack of suitable methodologies available to assess them). We also know relatively little about how biodiversity is affected by major current impacts, such as unmarked rainforest roads, which contribute to habitat degradation and fragmentation. This thesis explores the potential biodiversity value of regenerating rainforests under the best of scenarios and seeks to understand more about the impact of current human disturbance on biodiversity; the data come from case studies in the Manu and Sumaco Biosphere Reserves in the Western Amazon. Specifically, I compare the overall biodiversity and conservation value of a best-case regenerating rainforest site with a selection of well-studied primary forest sites and with predicted species lists for the region, including a focus on species of key conservation concern. I then investigate the biodiversity of the same study site in reference to different types of historic anthropogenic disturbance. Following this, I investigate the impacts on biodiversity from an unmarked rainforest road. In order to understand more about the differential effects of habitat disturbance on arboreal diversity, I directly assess how patterns of butterfly biodiversity vary between three vertical strata. Although assessments within the canopy have been made for birds, invertebrates and bats, very few studies have successfully targeted arboreal mammals. I therefore investigate the potential of camera traps for inventorying arboreal mammal species in comparison with traditional methodologies. Finally, in order to investigate the possibility that different survey methodologies might identify different biodiversity patterns in habitat disturbance assessments, I investigate whether two different but commonly used survey methodologies for assessing amphibians indicate the same or different responses of amphibian biodiversity to historic habitat change by people. The regenerating rainforest study site contained high levels of species richness, both in terms of the alpha diversity found in nearby primary forest areas (87% ±3.5) and in terms of the primary forest diversity predicted for the region (83% ±6.7). This included 89% (39 out of 44) of the species of high conservation concern predicted for the Manu region. Faunal species richness in once completely cleared regenerating forest was on average 13% (±9.8) lower than in historically selectively logged forest. The presence of the small unmarked road significantly altered levels of faunal biodiversity for three taxa, up to and potentially beyond 350 m into the forest interior. Most notably, the impact on biodiversity extended to at least 32% of the whole reserve area.
The assessment of butterflies across strata showed that different vertical zones within the same rainforest responded differently in areas with different historic human disturbance. A comparison between forest regenerating after selective logging and forest regenerating after complete clearance showed that there was a 17% greater reduction in canopy species richness in the historically cleared forest compared with the terrestrial community. Comparing arboreal camera traps with traditional ground-based techniques suggests that camera traps are an effective tool for inventorying secretive arboreal rainforest mammal communities and that they detect a higher number of cryptic species. Finally, the two survey methodologies used to assess amphibian communities identified contrasting biodiversity patterns in a human-modified rainforest; one indicated biodiversity differences between forests with different human disturbance histories, whereas the other suggested no differences between forest disturbance types. Overall, in this thesis I find that regenerating and human-disturbed tropical forests can potentially contribute to rainforest biodiversity conservation, particularly in the best of circumstances. I also highlight the importance of utilising appropriate study methodologies to investigate these three-dimensional habitats, and contribute to the development of methodologies to do so. However, care should be taken when using different survey methodologies, which can provide contrasting biodiversity patterns in response to human disturbance.

Relevance:

80.00%

Publisher:

Abstract:

Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques. Using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and 2 other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to:
• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains could be made from multicore parallelism (section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive super-computers (section 5.3.5).
To evaluate ZSIM, two types of test circuits were used:
1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators.
2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits. The synthesizer allowed testing of a range of very large circuits, larger than the ones for which it was possible to obtain open source files.
The experimental results show that with SIMD acceleration and multicore, ZSIM gained a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives comparable simulation performance to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself. However, when targeting GPUs, explicit cache management in the program increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines. The same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows: the primary achievement of this work was proving that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range that was previously publicly available, based on prior work showing that the synthesis technique is valid.
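
As a hedged illustration of the kind of lock-free, SIMD-friendly levelized simulation the abstract describes (ZSIM's actual data structures are not given here, so the flat-array layout and gate encoding below are assumptions), the core idea can be approximated with NumPy's vectorized gather operations:

```python
# Hedged NumPy sketch of lock-free, SIMD-style levelized gate simulation:
# gates are stored level by level in flat arrays, so each level is evaluated
# with a vectorized gather (fancy indexing) plus bitwise logic, and outputs
# within a level are disjoint, so no locks are needed. The gate encoding and
# the tiny example circuit are illustrative assumptions, not ZSIM's format.

import numpy as np

AND, OR, XOR, NOT = 0, 1, 2, 3


def simulate(levels, values):
    """levels: list of (op, in_a, in_b, out) integer arrays, one per level.
    values: int8 array of all net values, with primary inputs pre-filled."""
    for op, in_a, in_b, out in levels:
        a = values[in_a]          # gather first inputs for the whole level
        b = values[in_b]          # gather second inputs (ignored for NOT)
        res = np.empty_like(a)
        for gate, expr in ((AND, a & b), (OR, a | b), (XOR, a ^ b), (NOT, 1 - a)):
            mask = op == gate
            res[mask] = expr[mask]
        values[out] = res         # disjoint outputs per level: lock-free
    return values


# Example: nets 0-1 are primary inputs; net 2 = AND(0, 1); net 3 = NOT(2),
# so net 3 computes NAND(0, 1).
levels = [
    (np.array([AND]), np.array([0]), np.array([1]), np.array([2])),
    (np.array([NOT]), np.array([2]), np.array([2]), np.array([3])),
]
values = np.zeros(4, dtype=np.int8)
values[[0, 1]] = 1
print(simulate(levels, values))   # -> [1 1 1 0]
```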

Relevance:

80.00%

Publisher:

Abstract:

OBJECTIVE. This study aimed to determine postnatal complications in monochorionic and biamniotic twin pregnancies in women aged 15 to 45 at the Hospital Homero Castanier Crespo in the city of Azogues. MATERIALS AND METHODS. This is a quantitative, retrospective study conducted on a sample of 41 clinical records, using a form designed and validated by the authors. The information source was secondary, through the review of the statistical archives and records of the twin pregnancies attended at the Hospital Homero Castanier Crespo. The information was processed in the statistical program SPSS version 1.5, and the results are presented in simple tables of frequencies and percentages. RESULTS. The study found that 12.1% of the sample had a diagnosis of preeclampsia; regarding education, 36.6% (15 patients); regarding marital status, 70.7% (29 patients); regarding place of residence, 56.1% (23 patients); regarding the amniotic placenta, 65.9% (27 patients); regarding the weights of the newborns, 62.2% (51) had low birth weight; and regarding the Apgar score, 95.1%. CONCLUSION. The study made it possible to determine postnatal complications in monochorionic and biamniotic twin pregnancies in women aged 15 to 45; we verified that twin number two is born with low weight, since twin one receives all the benefits during gestation, and hyperbilirubinemia plus respiratory distress syndrome (RDS) was also found.

Relevance:

80.00%

Publisher:

Abstract:

Inter-subject parcellation of functional Magnetic Resonance Imaging (fMRI) data based on a standard General Linear Model (GLM) and spectral clustering was recently proposed as a means to alleviate the issues associated with spatial normalization in fMRI. However, for all its appeal, a GLM-based parcellation approach introduces its own biases, in the form of a priori knowledge about the shape of the Hemodynamic Response Function (HRF) and task-related signal changes, or about the subject's behaviour during the task. In this paper, we introduce a data-driven version of the spectral clustering parcellation, based on Independent Component Analysis (ICA) and Partial Least Squares (PLS) instead of the GLM. First, a number of independent components are automatically selected. Seed voxels are then obtained from the associated ICA maps, and we compute the PLS latent variables between the fMRI signal of the seed voxels (which covers regional variations of the HRF) and the principal components of the signal across all voxels. Finally, we parcellate all subjects' data with a spectral clustering of the PLS latent variables. We present results of the application of the proposed method on both single-subject and multi-subject fMRI datasets. Preliminary experimental results, evaluated with the intra-parcel variance of GLM t-values and PLS-derived t-values, indicate that this data-driven approach offers an improvement in terms of parcellation accuracy over GLM-based techniques.
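
A minimal sketch of the described pipeline using scikit-learn stand-ins, assuming synthetic data; the component count, the one-peak-voxel-per-map seed rule and the parcel count are illustrative choices, not the paper's exact procedure:

```python
# Hedged scikit-learn sketch of the data-driven parcellation pipeline above:
# ICA -> seed voxels from the component maps -> PLS latent variables between
# the seed time courses and the principal components of all voxels ->
# spectral clustering of voxel loadings. Synthetic data; the thresholds and
# counts are illustrative choices, not the paper's exact procedure.

import numpy as np
from sklearn.decomposition import FastICA, PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))   # fMRI run: 200 time points x 500 voxels

# 1. ICA on the data; take one peak voxel per spatial map as a seed.
ica = FastICA(n_components=10, random_state=0)
ica.fit(X)
maps = ica.components_                             # 10 spatial maps (10 x 500)
seeds = np.unique(np.abs(maps).argmax(axis=1))     # seed voxel indices

# 2. PLS between the seed signals (covering regional HRF variation) and the
#    principal components of the signal across all voxels.
pcs = PCA(n_components=20).fit_transform(X)        # 200 x 20
pls = PLSRegression(n_components=5).fit(pcs, X[:, seeds])
latent = pls.transform(pcs)                        # 200 x 5 latent variables

# 3. Parcellate: cluster the voxels by their loading on the latent variables.
voxel_loadings = X.T @ latent                      # 500 voxels x 5 features
parcels = SpectralClustering(n_clusters=8, random_state=0).fit_predict(voxel_loadings)
print(np.bincount(parcels))                        # voxel count per parcel
```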

Relevance:

80.00%

Publisher:

Abstract:

Master's dissertation, Language Sciences, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2014

Relevance:

80.00%

Publisher:

Abstract:

Food bought at supermarkets in, for instance, North America or the European Union gives comprehensive information about ingredients and allergens. Meanwhile, the menus of restaurants are usually incomplete and cannot normally be completed by the waiter. This is especially important when traveling to countries with a different culture. A curious example is "calamares en su tinta" (squid in its own ink), a common dish in Spain. Its brief description would be "squid with boiled rice in its own (black) ink", but an ingredient of its sauce is flour, a fact very important for celiacs. Some constraints are based on religious beliefs or due to food allergies or illnesses, while others derive simply from personal preferences. Another complicated situation arises in hospitals, where the doctors' nutritional recommendations have to be added to the patient's usual constraints. We have therefore designed and developed a Rule-Based Expert System (RBES) that can address these problems. The rules derive directly from the recipes of the different dishes and contain the information about the required ingredients and ways of cooking. In fact, we distinguish: ingredients and ways of cooking, intermediate products (like sauces, which aren't always made explicit) and final products (the dishes listed in the menu of the restaurant). For a certain restaurant, customer and instant, the inputs to the RBES are the current stock of ingredients and the personal characteristics of that customer. The RBES then prepares a "personalized menu" using set operations and knowledge extraction (thanks to an algebraic inference engine [1]). The RBES has been implemented in the computer algebra system Maple™ 2015. A first version of this work was presented at the "Applications of Computer Algebra 2015" (ACA'2015) conference. The corresponding abstract is available at [2].
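
A much-simplified sketch of the set operations described above, in Python rather than the Maple implementation the authors used; the recipes, the celiac constraint and the function names are illustrative assumptions:

```python
# Much-simplified Python sketch of the set operations described above (the
# actual RBES is implemented in Maple 2015 with an algebraic inference
# engine). Recipes map dishes and intermediate products (like sauces) to
# their parts; a dish is offered only if its full ingredient closure is
# covered by the stock and avoids the customer's constraints.
# All recipes and constraints here are illustrative assumptions.

RECIPES = {
    "squid in its own ink": {"squid", "boiled rice", "ink sauce"},
    "ink sauce": {"squid ink", "flour", "olive oil"},  # flour: crucial for celiacs
    "grilled fish": {"fish", "olive oil", "salt"},
}
DISHES = ["squid in its own ink", "grilled fish"]      # the printed menu


def closure(product):
    """Expand a product down to its set of base ingredients."""
    items = set()
    for part in RECIPES.get(product, {product}):
        if part in RECIPES:
            items |= closure(part)   # intermediate product: keep expanding
        else:
            items.add(part)          # base ingredient
    return items


def personalized_menu(dishes, stock, forbidden):
    """Dishes whose ingredient closure is in stock and avoids constraints."""
    return [d for d in dishes
            if closure(d) <= stock and not (closure(d) & forbidden)]


stock = {"squid", "boiled rice", "squid ink", "flour", "olive oil",
         "fish", "salt"}
celiac = {"flour"}  # gluten constraint for a celiac customer
print(personalized_menu(DISHES, stock, celiac))  # -> ['grilled fish']
```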