60 results for Effectivity


Relevance:

10.00%

Publisher:

Abstract:

El daño cerebral adquirido (DCA) es un problema social y sanitario grave, de magnitud creciente y de una gran complejidad diagnóstica y terapéutica. Su elevada incidencia, junto con el aumento de la supervivencia de los pacientes, una vez superada la fase aguda, lo convierten también en un problema de alta prevalencia. En concreto, según la Organización Mundial de la Salud (OMS), el DCA estará entre las 10 causas más comunes de discapacidad en el año 2020. La neurorrehabilitación permite mejorar el déficit tanto cognitivo como funcional y aumentar la autonomía de las personas con DCA. Con la incorporación de nuevas soluciones tecnológicas al proceso de neurorrehabilitación se pretende alcanzar un nuevo paradigma donde se puedan diseñar tratamientos que sean intensivos, personalizados, monitorizados y basados en la evidencia, ya que son estas cuatro características las que aseguran que los tratamientos sean eficaces. A diferencia de la mayor parte de las disciplinas médicas, no existen asociaciones de síntomas y signos de la alteración cognitiva que faciliten la orientación terapéutica. Actualmente, los tratamientos de neurorrehabilitación se diseñan en base a los resultados obtenidos en una batería de evaluación neuropsicológica que evalúa el nivel de afectación de cada una de las funciones cognitivas (memoria, atención, funciones ejecutivas, etc.). La línea de investigación en la que se enmarca este trabajo pretende diseñar y desarrollar un perfil cognitivo basado no sólo en el resultado obtenido en esa batería de test, sino también en información teórica que engloba tanto estructuras anatómicas como relaciones funcionales, y en información anatómica obtenida de los estudios de imagen. De esta forma, el perfil cognitivo utilizado para diseñar los tratamientos integra información personalizada y basada en la evidencia. Las técnicas de neuroimagen representan una herramienta fundamental en la identificación de lesiones para la generación de estos perfiles cognitivos. La aproximación clásica utilizada en la identificación de lesiones consiste en delinear manualmente regiones anatómicas cerebrales. Esta aproximación presenta diversos problemas relacionados con inconsistencias de criterio entre distintos clínicos, reproducibilidad y tiempo. Por tanto, la automatización de este procedimiento es fundamental para asegurar una extracción objetiva de información. La delineación automática de regiones anatómicas se realiza mediante el registro tanto contra atlas como contra otros estudios de imagen de distintos sujetos. Sin embargo, los cambios patológicos asociados al DCA conllevan siempre anormalidades de intensidad y/o cambios en la localización de las estructuras. Este hecho provoca que los algoritmos de registro tradicionales basados en intensidad no funcionen correctamente y requieran la intervención del clínico para seleccionar ciertos puntos (que en esta tesis hemos denominado puntos singulares). Además, estos algoritmos tampoco permiten que se produzcan deformaciones grandes deslocalizadas, hecho que también puede ocurrir ante la presencia de lesiones provocadas por un accidente cerebrovascular (ACV) o un traumatismo craneoencefálico (TCE). Esta tesis se centra en el diseño, desarrollo e implementación de una metodología para la detección automática de estructuras lesionadas que integra algoritmos cuyo objetivo principal es generar resultados que puedan ser reproducibles y objetivos.
Esta metodología se divide en cuatro etapas: pre-procesado, identificación de puntos singulares, registro y detección de lesiones. Los trabajos y resultados alcanzados en esta tesis son los siguientes: Pre-procesado. En esta primera etapa, el objetivo es homogeneizar todos los datos de entrada con el fin de poder extraer conclusiones válidas de los resultados obtenidos. Esta etapa, por tanto, tiene un gran impacto en los resultados finales. Se compone de tres operaciones: eliminación del cráneo, normalización en intensidad y normalización espacial. Identificación de puntos singulares. El objetivo de esta etapa es automatizar la identificación de puntos anatómicos (puntos singulares). Esta etapa equivale a la identificación manual de puntos anatómicos por parte del clínico, permitiendo: identificar un mayor número de puntos, lo que se traduce en mayor información; eliminar el factor asociado a la variabilidad inter-sujeto, de forma que los resultados son reproducibles y objetivos; y eliminar el tiempo invertido en el marcado manual de puntos. Este trabajo de investigación propone un algoritmo de identificación de puntos singulares (descriptor) basado en una solución multi-detector y que contiene información multi-paramétrica: espacial y asociada a la intensidad. Este algoritmo ha sido contrastado con otros algoritmos similares encontrados en el estado del arte. Registro. En esta etapa se pretenden poner en concordancia espacial dos estudios de imagen de sujetos/pacientes distintos. El algoritmo propuesto en este trabajo de investigación está basado en descriptores y su principal objetivo es el cálculo de un campo vectorial que permita introducir deformaciones deslocalizadas en la imagen (en distintas regiones de la imagen) y tan grandes como indique el vector de deformación asociado. El algoritmo propuesto ha sido comparado con otros algoritmos de registro utilizados en aplicaciones de neuroimagen con estudios de sujetos control. Los resultados obtenidos son prometedores y representan un nuevo contexto para la identificación automática de estructuras. Identificación de lesiones. En esta última etapa se identifican aquellas estructuras cuyas características asociadas a la localización espacial y al área o volumen han sido modificadas con respecto a una situación de normalidad. Para ello se realiza un estudio estadístico del atlas que se vaya a utilizar y se establecen los parámetros estadísticos de normalidad asociados a la localización y al área. En función de las estructuras delineadas en el atlas, se podrán identificar más o menos estructuras anatómicas, siendo nuestra metodología independiente del atlas seleccionado. En general, esta tesis doctoral corrobora las hipótesis de investigación postuladas relativas a la identificación automática de lesiones utilizando estudios de imagen médica estructural, concretamente estudios de resonancia magnética. Basándose en estos cimientos, se podrán abrir nuevos campos de investigación que contribuyan a la mejora en la detección de lesiones. ABSTRACT Brain injury constitutes a serious social and health problem of increasing magnitude and of great diagnostic and therapeutic complexity. Its high incidence and survival rate, after the initial critical phases, make it a prevalent problem that needs to be addressed. In particular, according to the World Health Organization (WHO), brain injury will be among the 10 most common causes of disability by 2020.
Neurorehabilitation improves both cognitive and functional deficits and increases the autonomy of brain injury patients. The incorporation of new technologies into neurorehabilitation aims to reach a new paradigm focused on designing intensive, personalized, monitored and evidence-based treatments, since these four characteristics ensure the effectivity of treatments. Contrary to most medical disciplines, it is not possible to link symptoms and cognitive disorder syndromes to assist the therapist. Currently, neurorehabilitation treatments are planned considering the results obtained from a neuropsychological assessment battery, which evaluates the functional impairment of each cognitive function (memory, attention, executive functions, etc.). The research line under which this PhD falls aims to design and develop a cognitive profile based not only on the results obtained in the assessment battery, but also on theoretical information that includes both anatomical structures and functional relationships, and on anatomical information obtained from medical imaging studies, such as magnetic resonance. Therefore, the cognitive profile used to design these treatments integrates personalized and evidence-based information. Neuroimaging techniques represent an essential tool to identify lesions and generate this type of cognitive dysfunctional profiles. Manual delineation is the classical approach to identifying brain anatomical regions. Manual approaches present several problems related to inconsistencies across different clinicians, time and repeatability. Automated delineation is done by registering brains to one another or to a template. However, when imaging studies contain lesions, there are several intensity abnormalities and location alterations that reduce the performance of most of the registration algorithms based on intensity parameters. Thus, specialists may have to manually interact with imaging studies to select landmarks (called singular points in this PhD) or identify regions of interest. These two solutions have the same drawbacks as the manual approaches mentioned before. Moreover, these registration algorithms do not allow large and distributed deformations. This type of deformation may also appear when a stroke or a traumatic brain injury (TBI) occurs. This PhD is focused on the design, development and implementation of a new methodology to automatically identify lesions in anatomical structures. This methodology integrates algorithms whose main objective is to generate objective and reproducible results. It is divided into four stages: pre-processing, singular points identification, registration and lesion detection. Pre-processing stage. In this first stage, the aim is to standardize all input data in order to be able to draw valid conclusions from the results. Therefore, this stage has a direct impact on the final results. It consists of three steps: skull-stripping, spatial and intensity normalization. Singular points identification. This stage aims to automatize the identification of anatomical points (singular points). It replaces the manual identification of anatomical points by the clinician. This automatic identification makes it possible to identify a greater number of points, which results in more information; to remove the factor associated with inter-subject variability, so that the results are reproducible and objective; and to eliminate the time spent on manual marking.
This PhD proposes an algorithm to automatically identify singular points (descriptor) based on a multi-detector approach. This algorithm contains multi-parametric (spatial and intensity) information and has been compared with other similar algorithms found in the state of the art. Registration. The goal of this stage is to put two imaging studies of different subjects/patients into spatial correspondence. The algorithm proposed in this PhD is based on descriptors. Its main objective is to compute a vector field to introduce distributed deformations (changes in different imaging regions), as large as the deformation vector indicates. The proposed algorithm has been compared with other registration algorithms used in different neuroimaging applications with control subjects. The obtained results are promising and they represent a new context for the automatic identification of anatomical structures. Lesion identification. This final stage aims to identify those anatomical structures whose characteristics associated with spatial location and area or volume have been modified with respect to a normal state. A statistical study of the atlas to be used is performed to establish the statistical parameters associated with the normal state. The anatomical structures that can be identified depend on those delineated in the selected atlas; the proposed methodology is independent from the selected atlas. Overall, this PhD corroborates the investigated research hypotheses regarding the automatic identification of lesions based on structural medical imaging studies (magnetic resonance studies). Based on these foundations, new research fields to improve the automatic identification of lesions in brain injury can be proposed.
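The lesion-identification stage lends itself to a compact numerical illustration. The sketch below is not the thesis implementation: the structure names, atlas statistics and the z-score threshold are all made up for illustration. It simply flags structures whose centroid location or area falls outside atlas-derived normal ranges, which is the kind of test the stage describes.

```python
import numpy as np

def flag_lesioned_structures(structures, atlas_stats, z_thresh=2.5):
    """Flag structures whose centroid or area deviates from atlas normality.

    structures : dict  name -> {"centroid": (x, y, z), "area": float}
    atlas_stats: dict  name -> {"centroid_mean", "centroid_std", "area_mean", "area_std"}
    z_thresh   : z-score above which a structure is reported (illustrative choice).
    """
    flagged = {}
    for name, meas in structures.items():
        ref = atlas_stats[name]
        # z-score of the centroid displacement (location criterion)
        z_loc = np.abs((np.asarray(meas["centroid"]) - np.asarray(ref["centroid_mean"]))
                       / np.asarray(ref["centroid_std"])).max()
        # z-score of the area/volume change (size criterion)
        z_area = abs(meas["area"] - ref["area_mean"]) / ref["area_std"]
        if z_loc > z_thresh or z_area > z_thresh:
            flagged[name] = {"z_location": float(z_loc), "z_area": float(z_area)}
    return flagged

# Toy usage with made-up numbers (one hypothetical structure).
atlas = {"hippocampus_L": {"centroid_mean": (30.0, 40.0, 25.0),
                           "centroid_std": (1.5, 1.5, 1.5),
                           "area_mean": 350.0, "area_std": 30.0}}
patient = {"hippocampus_L": {"centroid": (34.0, 41.0, 25.5), "area": 240.0}}
print(flag_lesioned_structures(patient, atlas))
```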

Relevance:

10.00%

Publisher:

Abstract:

Una de las características de la cartografía y SIG Participativos (SIGP) es incluir en sus métodos a la sociedad civil para aportar contenidos cualitativos a la información de sus territorios. Sin embargo, no sólo se trata de datos, sino de los efectos que pueden tener estas prácticas sobre el territorio y su sociedad. El acceso a esa información se ve reducido en contraste con el incremento de información difundida a través de servicios de visualización, geoinformación y cartografía on-line. Todo esto hace que sea necesario el análisis del alcance real de las metodologías participativas en el uso de Información Geográfica (IG) y la comparación desde distintos contextos geográficos. También es importante conocer los beneficios e inconvenientes del acceso a la información para el planeamiento; desde la visibilidad de muchos pueblos desapercibidos en zonas rurales y periféricas, hasta la influencia en programas de gobierno sobre la gestión del territorio, pasando por el conocimiento local espacial. El análisis se centró en los niveles de participación de la sociedad civil y sus grados de accesibilidad a la información (acceso y uso), dentro del estudio de los SIGP y el Participatory Mapping; además, se estudiaron los TIG (Tecnologías de Información Geográfica), las cartografías on-line (geoweb) y las plataformas de geovisualización espacial, como recursos de Neocartografía. En este sentido, se realizó un trabajo de campo de cartografía participativa en Bolivia, se evaluaron distintos proyectos SIGP en países del norte y sur (comparativa de contextos en países en desarrollo) y se analizaron los resultados del cruce de las distintas variables (validación, accesibilidad, verificación de datos, valor en la planificación e identidad). La tesis considera que ambos factores (niveles de participación y grado de accesibilidad) afectan a (i) la validación, verificación y calidad de los datos, (ii) el valor analítico en la planificación y (iii) el modelo de identidad de un lugar, y que, al ser tratados de forma integral, constituyen el valor añadido que los SIGP pueden aportar para lograr una planificación efectiva. Asimismo, se comprueba que la dimensión participativa en los SIGP varía según el contexto, la centralización de sus actores e intereses sectoriales. La información resultante de las prácticas SIGP tiende a estar restringida por la falta de legislaciones y por la ausencia de formatos estándar, que limitan la difusión e intercambio de la información. Todo esto repercute en la efectividad de una planificación estratégica y en la viabilidad de la implementación de cualquier proyecto sobre el territorio, y en consecuencia sobre los niveles de desarrollo de un país. Se confirma la hipótesis de que todos los elementos citados en los SIGP y el mapeo participativo actuarán como herramientas válidas para el fortalecimiento y la eficacia en la planificación sólo si están interconectados y vinculados entre sí. Se plantea una propuesta metodológica ante las formas convencionales de planificación (una nueva ruta del planeamiento, que incluye el intercambio de recursos y la determinación participativa local antes de establecer la implementación); con ello, se logra incorporar los beneficios de las metodologías participativas en el manejo de la IG y los SIG (Sistemas de Información Geográfica) como instrumentos estratégicos para el desarrollo de la identidad local y la optimización en los procesos de planeamiento y estudios del territorio.
Por último, se fomenta que en futuras líneas de trabajo los mapas de los SIGP y la cartografía participativa puedan llegar a ser instrumentos visuales representativos que transfieran valores identitarios del territorio y de su sociedad y, de esta manera, ayudar a alcanzar un mayor conocimiento, reconocimiento y valoración de los territorios para sus habitantes y sus planificadores. ABSTRACT A feature of participatory mapping and PGIS is to include the participation of civil society in order to provide qualitative information about their territories. However, the focus is not only on data, but also on the effects that such practices themselves may have on the territory and its society. Access to this information is reduced, in contrast to the increase of information disseminated through visualization services, geoinformation, and online cartography. Thus, the analysis of the real scope of participatory methodologies in the use of Geographic Information (GI) is necessary, including the comparison of different geographical contexts. It is also important to know the benefits and disadvantages of access to the information needed for planning in different contexts, ranging from unnoticed rural areas and suburbs to the influence on government programs on land management through local spatial knowledge. The analysis focused on the participation levels of civil society and the degrees of accessibility of the information (access and use) within the study of Participatory GIS (PGIS). In addition, this work studies GIT (Geographic Information Technologies), online cartographies (Geoweb) and platforms of spatial geovisualization as resources of Neocartography. A participatory cartographic fieldwork was carried out in Bolivia. Several PGIS projects were evaluated in Northern and Southern countries (comparing contexts with developing countries), and the results were analyzed for each of these variables (validation, accessibility, verification, value, identity). The thesis considers that both factors (participation levels and degree of accessibility) affect (i) the validation, verification and quality of the data, (ii) the analytical value for planning, and (iii) the identity of a place. The integrated management of all the above-cited criteria constitutes an added value that PGISs can contribute to reach an effective planning. Also, it is confirmed that the participatory dimension of PGISs varies according to the context, the centralization of its actors, and sectorial interests. The resulting information from PGIS practices tends to be restricted by the lack of legislation and by the absence of standard formats, which in turn limits the diffusion and exchange of the information. All of this has repercussions on the effectiveness of strategic planning and on the viability of the implementation of projects on the territory, and consequently on a country's levels of development. The hypothesis is confirmed: all the described elements of PGISs and participatory mapping will act as valid tools in strengthening and improving the effectivity of planning only if they are interconnected and linked amongst themselves. This work, therefore, suggests a methodological proposal in the face of the conventional ways of planning: a new planning route which includes the exchange of resources and local participatory determination before any plan is established.
With this, the benefits of participatory methodologies in the management of GI and GIS (Geographic Information Systems) are incorporated as strategic instruments for the development of local identity and the optimization of planning processes and territorial studies. Finally, the study outlines future work on PGIS maps and Participatory Mapping, such that these could eventually evolve into representative visual instruments that transfer identity values of the territory and its society. In this way, they would contribute to attaining a better knowledge, recognition, and appreciation of the territories for their inhabitants and planners.

Relevance:

10.00%

Publisher:

Abstract:

Robert Kennedy's announcement of the assassination of Martin Luther King, Jr., in an Indianapolis urban community that did not revolt in riots on April 4, 1968, provides one significant example in which feelings, energy, and bodily risk resonate alongside the articulated message. The relentless focus on Kennedy's spoken words, in historical biographies and other critical research, presents a problem of isolated effect, because the power really comes from elements outside the speech act. Thus, this project embraces the complexities of rhetorical effectivity, which involves such things as the unique situational context, all participants (both Kennedy and his audience) of the speech act, aesthetic argument, and the ethical implications. This version of the story embraces the many voices of the participants through first-hand interviews and new oral history reports. Using evidence provided by actual participants in the 1968 Indianapolis event, this project reflects critically upon the world disclosure of the event as it emerges from those remembrances. Phenomenology provides one answer to the constitutive dilemma of rhetorical effectivity that stems from the lack of a framework that gets at questions of ethics, aesthetics, feelings, energy, etc. Thus, this work takes a pedagogical shift away from discourse (verbal/written) as the primary place to render judgments about the effects of communication interaction. With a turn to explore extra-sensory reasoning, by way of the physical, emotional, and numinous, a multi-dimensional look at public address is delivered. The rhetorician will be interested in new ways of assessing effects. The communication ethicist will appreciate the work as concepts like answerability, emotional-volitional tone, and care for the other come to life via application and consideration of Kennedy's appearance. For argumentation scholars, the interest comes forth in a re-thinking of how we do argumentation. And the critical cultural scholar will find this story ripe with opportunities to uncover the politics of representation, racialized discourse, privilege, power, ideological hegemony, and reconciliation. Through a multi-layered approach, this real-life tale exposes the power of presence between audience and speaker, emotive argument, and the magical turn of fate, all of which contribute to the possibility of a dialogic rhetoric.

Relevance:

10.00%

Publisher:

Abstract:

Master's thesis in Emerging Infectious Diseases, Universidade de Lisboa, Faculdade de Medicina, 2016.

Relevance:

10.00%

Publisher:

Abstract:

The aim of this study was to determine whether an ophthalmophakometric technique could offer a feasible means of investigating ocular component contributions to residual astigmatism in human eyes. Current opinion was gathered on the prevalence, magnitude and source of residual astigmatism. It emerged that a comprehensive evaluation of the astigmatic contributions of the eye's internal ocular surfaces and their respective axial separations (effectivity) had not been carried out to date. An ophthalmophakometric technique was developed to measure astigmatism arising from the internal ocular components. Procedures included the measurement of refractive error (infra-red autorefractometry), anterior corneal surface power (computerised video keratography), axial distances (A-scan ultrasonography) and the powers of the posterior corneal surface in addition to both surfaces of the crystalline lens (multi-meridional still flash ophthalmophakometry). Computing schemes were developed to yield the required biometric data. These included (1) calculation of crystalline lens surface powers in the absence of Purkinje images arising from its anterior surface, (2) application of meridional analysis to derive spherocylindrical surface powers from notional powers calculated along four pre-selected meridians, (3) application of astigmatic decomposition and vergence analysis to calculate contributions to residual astigmatism of ocular components with obliquely related cylinder axes, and (4) calculation of the effect of random experimental errors on the calculated ocular component data. A complete set of biometric measurements was taken from both eyes of 66 undergraduate students. Effectivity due to corneal thickness made the smallest cylinder power contribution (up to 0.25DC) to residual astigmatism, followed by contributions of the anterior chamber depth (up to 0.50DC) and crystalline lens thickness (up to 1.00DC). In each case astigmatic contributions were predominantly direct. More astigmatism arose from the posterior corneal surface (up to 1.00DC) and both crystalline lens surfaces (up to 2.50DC). The astigmatic contributions of the posterior corneal and lens surfaces were found to be predominantly inverse, whilst direct astigmatism arose from the anterior lens surface. Very similar results were found for right versus left eyes and males versus females. Repeatability was assessed on 20 individuals. The ophthalmophakometric method was found to be prone to considerable accumulated experimental errors. However, these errors are random in nature, so that group-averaged data were found to be reasonably repeatable. A further confirmatory study was carried out on 10 individuals, which demonstrated that biometric measurements made with and without cycloplegia did not differ significantly.
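The meridional analysis and astigmatic decomposition steps can be illustrated with standard power-vector algebra (the M, J0, J45 notation). The sketch below is only a simplified stand-in for the study's computing schemes: it adds two hypothetical surface contributions as thin components and ignores the effectivity corrections for axial separation that the study explicitly accounts for; all numerical values are illustrative.

```python
import numpy as np

def to_power_vector(sphere, cyl, axis_deg):
    """Spherocylinder (negative-cyl form) -> power vector (M, J0, J45)."""
    a = np.deg2rad(axis_deg)
    M = sphere + cyl / 2.0
    J0 = -(cyl / 2.0) * np.cos(2 * a)
    J45 = -(cyl / 2.0) * np.sin(2 * a)
    return np.array([M, J0, J45])

def from_power_vector(v):
    """Power vector (M, J0, J45) -> spherocylinder in negative-cyl form."""
    M, J0, J45 = v
    cyl = -2.0 * np.hypot(J0, J45)
    sphere = M - cyl / 2.0
    axis = np.rad2deg(0.5 * np.arctan2(J45, J0)) % 180.0
    return sphere, cyl, axis

# Combine two obliquely crossed astigmatic contributions (e.g. a corneal and a
# lenticular surface component) by simple addition of their power vectors.
cornea = to_power_vector(0.00, -1.00, 180)   # illustrative values only
lens   = to_power_vector(0.00, -0.75, 45)
print(from_power_vector(cornea + lens))
```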

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this research was to investigate the effects of Processing Instruction (VanPatten, 1996, 2007), as an input-based model for teaching second language grammar, on Syrian learners' processing abilities. Specifically, the research examined the effects of Processing Instruction on the acquisition of English relative clauses by Syrian learners using a quasi-experimental design. Three separate groups were involved in the research (Processing Instruction, Traditional Instruction and a Control Group). For assessment, a pre-test, a direct post-test and a delayed post-test were used as the main tools for eliciting data. A questionnaire was also distributed to participants in the Processing Instruction group to give them the opportunity to provide feedback on the treatment they received in comparison with the Traditional Instruction they were used to. Four hypotheses were formulated on the possible effectivity of Processing Instruction on Syrian learners' linguistic system. It was hypothesised that Processing Instruction would improve learners' processing abilities, leading to an improvement in their linguistic system. This was expected to lead to better performance in the comprehension and production of English relative clauses. The main source of data was analysed statistically using ANOVA, supported by Cohen's d calculations, which showed the magnitude of the effects of the three treatments. Results of the analysis showed that both the Processing Instruction and Traditional Instruction groups had improved after treatment. However, the Processing Instruction group significantly outperformed the other two groups in the comprehension of relative clauses. The analysis concluded that Processing Instruction is a useful tool for teaching relative clauses to Syrian learners. This conclusion was reinforced by participants' responses to the questionnaire, as they favoured Processing Instruction over Traditional Instruction. This research has theoretical and pedagogical implications. Theoretically, the study showed support for the Input Hypothesis: Processing Instruction had a positive effect on input processing as it affected learners' linguistic system. This was reflected in learners' performance, where they were able to produce a structure which they had not been asked to produce. Pedagogically, the present research showed that Processing Instruction is a useful tool for teaching English grammar in the context where the experiment was carried out, as it had a large effect on learners' performance.
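The statistical treatment described (one-way ANOVA across the three groups plus Cohen's d for effect magnitude) can be sketched as follows; the score arrays are hypothetical and merely stand in for the study's actual test data.

```python
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                        / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

# Hypothetical post-test comprehension scores for the three groups.
pi_group = np.array([8, 9, 7, 9, 8, 10, 9])   # Processing Instruction
ti_group = np.array([6, 7, 6, 8, 7, 6, 7])    # Traditional Instruction
ctrl     = np.array([4, 5, 4, 6, 5, 4, 5])    # Control

# One-way ANOVA across the three groups, then effect sizes for pairwise contrasts.
F, p = stats.f_oneway(pi_group, ti_group, ctrl)
print(f"F = {F:.2f}, p = {p:.4f}")
print("d (PI vs TI):     ", round(cohens_d(pi_group, ti_group), 2))
print("d (PI vs Control):", round(cohens_d(pi_group, ctrl), 2))
```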

Relevance:

10.00%

Publisher:

Abstract:

For today's industrialised economies, remanufacturing represents perhaps the largest unexploited resource and opportunity for realising greater economic growth in an environmentally conscious manner. The aim of this paper is to investigate the impact of remanufacturing on the economy from an economic-efficiency point of view. This phenomenon has been analysed in the literature in a static context. We use the multi-sector input–output framework in a dynamic context to study intra-period relationships between the sectors of the economy. We extend the classical dynamic input–output model by taking into consideration the activity of remanufacturing. We try to answer the question of whether remanufacturing/reuse increases the growth possibilities of an economy. We present a sufficient condition concerning the effectivity of an economy with remanufacturing. By this evaluation we analyse a possible sustainable development of the economy on the basis of the product recovery management of industries.
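As a rough illustration of the question posed, the sketch below computes the maximal balanced growth rate of the classical closed dynamic Leontief model with and without a crude remanufacturing adjustment. The coefficient matrices, the share parameter r and the way remanufacturing is folded in (as a reduction of current input requirements) are assumptions for illustration only, not the paper's actual extension.

```python
import numpy as np

def max_balanced_growth(A, B):
    """Maximal balanced growth rate g of the closed dynamic Leontief model
    x_t = A x_t + B (x_{t+1} - x_t):  (I - A) x = g B x  =>  g = 1 / lambda_max((I - A)^{-1} B)."""
    M = np.linalg.inv(np.eye(len(A)) - A) @ B
    lam = max(np.linalg.eigvals(M).real)   # dominant (Frobenius) eigenvalue
    return 1.0 / lam

A = np.array([[0.20, 0.10],   # current input coefficients (toy values)
              [0.30, 0.30]])
B = np.array([[0.40, 0.10],   # capital (investment) coefficients (toy values)
              [0.20, 0.50]])

r = 0.15  # assumed share of current inputs covered by remanufactured goods
print("g without remanufacturing:", round(max_balanced_growth(A, B), 4))
print("g with remanufacturing:   ", round(max_balanced_growth((1 - r) * A, B), 4))
```

With these made-up coefficients the adjusted economy admits a higher balanced growth rate, which is the qualitative effect the paper investigates.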

Relevance:

10.00%

Publisher:

Abstract:

The year 2014 was marked by the referendums on the sovereignty of Scotland and of Catalonia, two nations sharing many points in common in terms of history and culture. The pre-referendum legal framework of each of these regions is fundamentally the same: the legal existence of Scotland and of Catalonia derives directly from the will of a unitary central state, the United Kingdom and Spain respectively. The legislative competence to hold a referendum on the self-determination of these regions is, moreover, ambiguous. Faced with this dilemma, the United Kingdom allowed Scotland to organise a referendum on its sovereignty. The result was a democratic process that was fair, equitable, decisive and respected by all. For its part, Spain forbade Catalonia from doing the same, which did not prevent Barcelona from doing everything in its power to consult its population. The result was a citizen-participation process that bore no resemblance to a referendum in due form. Twenty years after the last referendum on the sovereignty of Quebec, the study of the Scottish and Catalan referendums allows us to highlight the soundness, but also the partial inconsistency, of the teachings of the Supreme Court of Canada in its Renvoi relatif à la sécession du Québec. On the one hand, the need to balance the underlying constitutional principles of democracy and constitutionalism is brought to the fore. At the same time, the concepts of a clear question and a clear answer, of effectivity and of post-referendum negotiations take on an entirely different colour in the face of a new imperative absent from the Supreme Court's conclusions: that of pre-referendum negotiations.

Relevance:

10.00%

Publisher:

Abstract:

This thesis develops an inventory management model and a procurement model for energy wood from the perspective of an energy production plant, and describes cost-effective and supply-reliable alternatives for the storage and chipping of wood fuel. The inventory management model focuses on methods for controlling stock levels within their operating environment. The procurement model determines the balance between the company's own stockpile and direct deliveries to the plant, and helps to consider the role of strategic procurement in implementing procurement and selecting procurement channels. The thesis provides answers for implementing and managing the procurement operation as a whole. Scenario analysis with the inventory management model showed that the company's own stockpile requires a safety stock of 18–37% relative to the operating stock. According to the procurement model, the procurement distance of the most profitable wood fuel fraction for the company's own stockpile could be on average at most 96 km. Once the demand, the availability, the cost levels of the fuel fractions and the possibilities of the operating environment are known, decisions on procurement channels and safety stocks can be made on the basis of cost-effectiveness. Implementing the control of the company's fuel volumes requires development measures. Standardising the company's own operating environment and documenting its operating models are important for information sharing, for dimensioning supply contracts and for developing operations. Reducing operational bottlenecks and routing wood fuel as efficiently as possible through the most cost-effective chipping chains generate cost savings in the supply chain.

Relevance:

10.00%

Publisher:

Abstract:

Master's dissertation, Universidade de Brasília, Faculdade Gama, Programa de Pós-Graduação em Engenharia Biomédica, 2015.

Relevance:

10.00%

Publisher:

Abstract:

Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more complex from the chemical perspective, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectivity of this tool, it can only provide 2-dimensional projection (shadow) images of the 3D structure, leaving the 3-dimensional information hidden, which can lead to incomplete or erroneous characterization. One very promising inspection method is Electron Tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometer resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram that is achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of the post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method was proposed, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which employs the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) in an adaptive way and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, the sparsity is applied on overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulation and real ET experiments of several morphologies are performed with a variety of setups. Reconstruction results validate its efficiency in both noiseless and noisy cases and show that it yields an improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select or whether the images used strictly follow the pre-conditions of a certain transform (e.g. strictly piecewise constant for Total Variation minimisation). This can also avoid artifacts that can be introduced by specific sparsifying transforms (e.g. the staircase artifacts that may result when using Total Variation minimisation).
Moreover, this thesis shows how reliable elementally sensitive tomography using EELS is possible with the aid of both the appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss electron energy loss spectroscopy (EELS) from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
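The patch-based sparse coding at the core of a dictionary-learning reconstruction can be illustrated with off-the-shelf tools. The sketch below is not the DLET implementation: it shows only the regularisation step (learning a dictionary on overlapping patches of a noisy 2-D slice and rebuilding the slice from sparse codes), omitting the alternating data-fidelity update against the measured tilt series; the phantom, patch size and solver settings are arbitrary.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(0)

# Toy 2-D "slice": a blocky phantom plus noise (stands in for an intermediate tomogram).
slice_clean = np.zeros((64, 64))
slice_clean[16:48, 16:48] = 1.0
slice_clean[24:40, 24:40] = 0.5
noisy = slice_clean + 0.15 * rng.standard_normal(slice_clean.shape)

# 1) Extract overlapping patches and remove their means (favours local structure).
patch_size = (8, 8)
patches = extract_patches_2d(noisy, patch_size)
X = patches.reshape(len(patches), -1)
means = X.mean(axis=1, keepdims=True)
X = X - means

# 2) Learn a sparsifying dictionary adapted to this particular image.
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, batch_size=200,
                                   max_iter=10, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4, random_state=0)
code = dico.fit(X).transform(X)

# 3) Rebuild the image from the sparse approximations of the patches.
X_rec = code @ dico.components_ + means
denoised = reconstruct_from_patches_2d(X_rec.reshape(patches.shape), noisy.shape)

print("RMSE noisy   :", np.sqrt(np.mean((noisy - slice_clean) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((denoised - slice_clean) ** 2)))
```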

Relevance:

10.00%

Publisher:

Abstract:

Photothermal imaging makes it possible to inspect the structure of composite materials by means of nondestructive tests. The surface of a medium is heated at a number of locations, and the resulting temperature field is recorded on the same surface. Thermal waves are strongly damped, so robust schemes are needed to reconstruct the structure of the medium from the decaying, time-dependent temperature field. The inverse problem is formulated as a weighted optimization problem with a time-dependent constraint. The inclusions buried in the medium and their material constants are the design variables. We propose an approximation scheme in two steps. First, Laplace transforms are used to generate an approximate optimization problem with a small number of stationary constraints. Then, we implement a descent strategy that alternates topological derivative techniques, to reconstruct the geometry of the inclusions, with gradient methods, to identify their material parameters. Numerical simulations assess the effectivity of the technique.
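The first step of the scheme, replacing the time-dependent constraint by a few stationary ones, amounts to evaluating Laplace transforms of the recorded temperature traces at a handful of transform parameters. A minimal numerical sketch, assuming a sampled synthetic trace and arbitrarily chosen values of s:

```python
import numpy as np

def laplace_transform(t, T, s_values):
    """Approximate T_hat(s) = integral_0^inf T(t) exp(-s t) dt from sampled data
    (trapezoidal rule, truncated at the last sample)."""
    t = np.asarray(t, dtype=float)
    T = np.asarray(T, dtype=float)
    out = []
    for s in s_values:
        f = T * np.exp(-s * t)
        out.append(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))
    return np.array(out)

# Synthetic decaying surface-temperature trace at one measurement point (illustrative only).
t = np.linspace(0.0, 10.0, 2001)                     # seconds
T = 3.0 * np.exp(-0.8 * t) + 1.5 * np.exp(-2.5 * t)  # degrees above ambient

# A few Laplace parameters turn the time-dependent record into a small set of
# stationary data, which then act as the constraints of the approximate problem.
s_values = [0.5, 1.0, 2.0, 4.0]
print(laplace_transform(t, T, s_values))
print([3.0 / (s + 0.8) + 1.5 / (s + 2.5) for s in s_values])  # analytic check
```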

Relevance:

10.00%

Publisher:

Abstract:

This research set out to replace the use of synthetic insecticides by formulating a bioinsecticidal shampoo for canine application based on the biocidal action of the essential oil of Ambrosia arborescens Mill (Altamisa). The plant was collected on the banks of the Tomebamba river, near the Balzay Campus of the Universidad de Cuenca, in San Joaquín parish. Collection took place from January to March 2016. The development and formulation of the product were carried out in the Biotechnology Laboratory of the Facultad de Ciencias Químicas, Universidad de Cuenca. The essential oil of A. arborescens was obtained by hydrodistillation using the Clevenger method, with a yield of 0.14%. Biocidal activity was established in an in vitro assay against the nematode Panagrellus redivivus, yielding a lethal dose (LD50) of 250 µL/mL. Owing to the moderate LD50 and the low yield, the strategy adopted was to determine the LD50 of the organic extract of A. arborescens, which was obtained by methanol extraction with a yield of 2% and an LD50 of 31.25 µL/mL. On the basis of these results, tests were carried out on dog fleas (Ctenocephalides canis) with the A. arborescens extract, establishing 100% effectivity at a concentration of 46.875 mg/mL within the shortest time period, this being the dose applied in the shampoo formulation. The methanolic extract of A. arborescens showed high biocidal activity compared with the essential oil. This active substance is promising for the formulation of bioinsecticides for pets.
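The abstract does not state how the LD50 values were estimated from the assay data; a common approach is to fit a log-logistic dose-response curve to dose-mortality observations. The sketch below uses made-up mortality fractions purely to illustrate that kind of fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, ld50, slope):
    """Two-parameter log-logistic dose-response curve (mortality fraction vs. dose)."""
    return 1.0 / (1.0 + (ld50 / dose) ** slope)

# Hypothetical mortality fractions of P. redivivus at increasing concentrations (uL/mL).
dose = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
mortality = np.array([0.05, 0.12, 0.30, 0.52, 0.81, 0.95])

(ld50, slope), _ = curve_fit(log_logistic, dose, mortality, p0=[250.0, 1.0])
print(f"estimated LD50 = {ld50:.1f} uL/mL, slope = {slope:.2f}")
```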

Relevance:

10.00%

Publisher:

Abstract:

Master's dissertation, Universidade de Brasília, Faculdade de Direito, Programa de Pós-Graduação em Direito, Estado e Constituição, 2016.