928 results for BLOWN PACK
Abstract:
[EN] Background This study aims to design an empirical test of the sensitivity of prescribing doctors to the price afforded by the patient, and to apply it to population data of primary care dispensations for cardiovascular disease and mental illness in the Spanish National Health System (NHS). Implications for drug policies are discussed. Methods We used population data on 17 therapeutic groups of cardiovascular and mental illness drugs aggregated by health areas to obtain 1424 observations ((8 cardiovascular groups * 70 areas) + (9 psychotropic groups * 96 areas)). All drugs are free for pensioners. For non-pensioner patients, 10 of the 17 therapeutic groups have a reduced copayment (RC) status of only 10% of the price with a ceiling of €2.64 per pack, while the remaining 7 groups have a full copayment (FC) rate of 40%. Differences in the average price between dispensations for pensioners and non-pensioners were modelled with multilevel regression models to test the following hypotheses: 1) in FC drugs there is a significant positive difference between the average prices of drugs prescribed to pensioners and non-pensioners; 2) in RC drugs there is no significant price differential between pensioner and non-pensioner patients; 3) the price differential of FC drugs prescribed to pensioners and non-pensioners is greater the higher the price of the drugs. Results The average monthly price of dispensations to pensioners and non-pensioners does not differ for RC drugs, but for FC drugs pensioners get more expensive dispensations than non-pensioners (estimated difference of €9.74 per DDD and month). There is a positive and significant effect of the drug price on the price differential between pensioners and non-pensioners. For FC drugs, each additional euro of the drug price increases the differential by nearly half a euro (0.492). We did not find any significant differences in the intensity of the price effect among FC therapeutic groups. 
Conclusions Doctors working in the Spanish NHS seem to be sensitive to the price that can be afforded by patients when they fill in prescriptions, although alternative hypotheses could also explain the results found.
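The hypothesis tests above lend themselves to a compact sketch. Below, synthetic data stand in for the dispensation records, and a plain OLS with an FC × price interaction stands in for the study's multilevel model; all numbers are illustrative, not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic health-area observations (illustrative only; not the study's data).
n = 1424
full_copay = rng.integers(0, 2, n)   # 1 = full copayment (FC), 0 = reduced (RC)
drug_price = rng.uniform(5, 60, n)   # average drug price in the group (EUR)

# Hypotheses 1-3 as a data-generating process: a pensioner/non-pensioner price
# differential exists only for FC drugs, and grows with the drug price.
price_diff = full_copay * (2.0 + 0.5 * drug_price) + rng.normal(0, 1.0, n)

# OLS with an FC x price interaction recovers the FC-only price sensitivity.
X = np.column_stack([np.ones(n), full_copay, full_copay * drug_price])
beta, *_ = np.linalg.lstsq(X, price_diff, rcond=None)
print(beta)  # roughly [0, 2.0, 0.5] by construction
```

A significant interaction coefficient (here ~0.5) is the analogue of hypothesis 3: the differential grows with the drug price for FC drugs only.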
Abstract:
Generic programming is likely to become a new challenge for a critical mass of developers. Therefore, it is crucial to refine the support for generic programming in mainstream Object-Oriented languages — both at the design and at the implementation level — as well as to suggest novel ways to exploit the additional degree of expressiveness made available by genericity. This study is meant to provide a contribution towards bringing Java genericity to a more mature stage with respect to mainstream programming practice, by increasing the effectiveness of its implementation and by revealing its full expressive power in real-world scenarios. With respect to the current research setting, the main contribution of the thesis is twofold. First, we propose a revised implementation of Java generics that greatly increases the expressiveness of the Java platform by adding reification support for generic types. Second, we show how Java genericity can be leveraged in a real-world case study in the context of multi-paradigm language integration. Several approaches have been proposed to overcome the lack of reification of generic types in the Java programming language. Existing approaches tackle the problem by defining new translation techniques that allow for a runtime representation of generics and wildcards. Unfortunately, most approaches suffer from several problems: heterogeneous translations are known to be problematic when considering reification of generic methods and wildcards, while more sophisticated techniques requiring changes in the Java runtime support reified generics through a true language extension (where clauses), so that backward compatibility is compromised. 
In this thesis we develop a sophisticated type-passing technique for addressing the problem of reification of generic types in the Java programming language; this approach — first pioneered by the so-called EGO translator — is here turned into a full-blown solution which reifies generic types inside the Java Virtual Machine (JVM) itself, thus overcoming both the performance penalties and the compatibility issues of the original EGO translator. Java-Prolog integration. Integrating Object-Oriented and declarative programming has been the subject of several research efforts and corresponding technologies. Such proposals come in two flavours: either attempting to join the two paradigms, or simply providing an interface library for accessing Prolog declarative features from a mainstream Object-Oriented language such as Java. Both solutions, however, have drawbacks: in the case of hybrid languages featuring both Object-Oriented and logic traits, the resulting language is typically too complex, making mainstream application development a harder task; in the case of library-based integration approaches there is no true language integration, and some “boilerplate code” has to be written to bridge the paradigm mismatch. In this thesis we develop a framework called PatJ which promotes seamless exploitation of Prolog programming in Java. A sophisticated usage of generics/wildcards allows a precise mapping to be defined between Object-Oriented and declarative features. PatJ defines a hierarchy of classes where the bidirectional semantics of Prolog terms is modelled directly at the level of the Java generic type system.
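The idea of modelling Prolog terms as a typed class hierarchy with a bidirectional mapping to term syntax can be sketched as follows; this is a deliberately simplified Python analogy, not PatJ's actual Java API (class and method names here are invented for illustration):

```python
# Minimal sketch: a class hierarchy for Prolog terms, with a mapping from
# objects back to Prolog term syntax. Names are hypothetical, not PatJ's.
class Term:
    def to_prolog(self) -> str:
        raise NotImplementedError

class Atom(Term):
    """A Prolog atom, e.g. tom."""
    def __init__(self, name: str):
        self.name = name
    def to_prolog(self) -> str:
        return self.name

class Var(Term):
    """A Prolog variable; rendered with a leading capital, e.g. X."""
    def __init__(self, name: str):
        self.name = name
    def to_prolog(self) -> str:
        return self.name.capitalize()

class Compound(Term):
    """A compound term, e.g. parent(tom, X)."""
    def __init__(self, functor: str, *args: Term):
        self.functor, self.args = functor, args
    def to_prolog(self) -> str:
        return f"{self.functor}({', '.join(a.to_prolog() for a in self.args)})"

goal = Compound("parent", Atom("tom"), Var("x"))
print(goal.to_prolog())  # parent(tom, X)
```

In PatJ the same structure is expressed with Java generics and wildcards, so that term well-formedness is checked by the type system rather than at runtime.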
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society where smart, electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices. This promotes the creation of pervasive environments improving the quality of life of the occupants and enhancing the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is embedded into the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenger modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. 
Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference on the physical phenomena sensed and allows easy and low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth that must be efficiently used to increase the degree of local “understanding” of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before being transmitted due to the scarce bandwidth capability of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something “interesting” is detected). The energy cost of image processing must however be carefully minimized. Imaging plays an important role in sensing devices for ambient intelligence. Computer vision can, for instance, be used for recognising persons and objects and recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. Real-time implementations of these approaches are a recently opened field of research. 
In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions on open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics which are outlined below. Although the design of the sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them: • Small form factor to reduce node intrusiveness. • Low power consumption to reduce battery size and to extend node lifetime. • Low cost for a widespread diffusion. These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low Power Video Sensor Nodes and Video Processing Algorithms. In comparison to scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. 
We have designed and developed wireless video sensor nodes focusing on small size and on flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first one is based on a low-cost, low-power FPGA + microcontroller system-on-chip, while the second one is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate in a continuous fashion with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented. Featuring such intelligence, these nodes would be able to cope with tasks such as recognition of unattended bags in airports or persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: in several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. 
Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. Such an approach has been shown to extend node lifetime and possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and presenting a limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation on standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve the system's performance, outperforming naive power management policies.
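The PIR-triggered, energy-aware wake-up policy can be sketched as a toy simulation; this is not the thesis implementation (a real node would use the MPC controller and measured energy figures), and all constants here are hypothetical:

```python
# Hedged sketch: a PIR-triggered, energy-aware policy in which the battery
# level gates whether the camera may wake. All constants are hypothetical.
def camera_should_wake(pir_triggered: bool, battery_level: float,
                       min_level: float = 0.2) -> bool:
    """Wake the camera only on a PIR event and only if enough energy remains."""
    return pir_triggered and battery_level >= min_level

def simulate(pir_events, battery=1.0, harvest=0.01, frame_cost=0.05):
    """Toy discrete-time energy budget: solar harvest vs. per-frame cost."""
    frames = 0
    for pir in pir_events:
        battery = min(1.0, battery + harvest)   # solar panel input, capped
        if camera_should_wake(pir, battery):
            battery -= frame_cost               # cost of capture + transmit
            frames += 1
    return frames, round(battery, 3)

frames, level = simulate([True, False, True, True, False] * 4)
print(frames, level)  # 12 frames captured, battery at 0.59
```

An MPC controller would replace the fixed `min_level` threshold with a policy that optimizes wake-ups over a predicted horizon of harvested energy and events.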
Abstract:
Food packaging can be defined as a coordinated system for preparing goods for transport, distribution, storage, sale and use. Plastics are among the materials most widely employed in the food industry for the production of packaging. They are organic substances derived from crude oil: solid compounds in their finished state, but mouldable in the fluid state. Food packaging must fulfil certain functions, among which: - containment of the product - protection of the product from external agents - logistics - communication - functionality - ecology. The last point is the main problem of plastic materials derived from crude oil. These materials are hard to recycle, because a package is often composed of several layered materials or because it is in direct contact with food. Moreover, these materials have a long degradation time (from 100 to 1000 years), which makes their disposal difficult and expensive. For this reason, in the last decade research has begun into a plastic material that is flexible to industrial needs and at the same time biodegradable. A first idea was to “imitate nature” by trying to replicate already existing macromolecules (derived from starch and sugars) to obtain a plastic-like substance usable for the same purposes, but biodegradable in about six months. These bioplastics did not take hold because of their high production cost and because it is impossible to reconvert production plants all over the world in a short time. A second school of thought directed its efforts towards the use of special additives, added in minimal amounts (1%) to conventional plastic materials, which enable their biodegradation in less than three years. An example of this kind of additive is ECM Masterbatch Pellets, a copolymer of EVA (ethylene vinyl acetate) which, added to traditional plastics, makes the final product completely biodegradable while maintaining its characteristics. The aim of this thesis work was to determine the changes in some quality parameters of Romagna nectarines (cv. Alexa®) packaged with traditional and innovative plastic films. Nectarine samples were packaged in 1 kg plastic baskets (sealed with a macro-perforated flow-pack film), either of the traditional polypropylene type (sample named TRA) or in trays of additive-treated polypropylene (sample named BIO), and stored at 4°C and 90-95% RH for 7 days to simulate refrigerated transport; the samples were then placed in a chamber at 20°C and 50% RH for 4 days to simulate storage at the point of sale. At time 0 and after 4, 7, 9 and 11 days the following analyses were carried out: - respiration coefficient, measured as the amount of CO2 produced - ripening index, expressed as the ratio between soluble solids content and titratable acidity - computerized image analysis - firmness of the fruit flesh, measured with a Texture Analyser dynamometer - total solids content, obtained gravimetrically by drying the samples in a vacuum oven - sensory characteristics (acceptability test). Conclusions Based on the results obtained, the two samples did not register significantly different scores throughout storage, especially as regards the sensory scores; it can therefore be concluded that the additive-treated biodegradable trays do not affect the storage of nectarines during the marketing of the product, at least for the parameters analysed. It would be worth verifying whether the degradation process of the additive-treated polymer is already triggered during the marketing of the fruit and, above all, whether gases that can accelerate fruit ripening (e.g. ethylene) are released during this process, as this would explain the higher respiration rate and the faster ripening of the fruit stored in these trays.
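The ripening (maturation) index used above is simply the ratio of soluble solids content to titratable acidity; a minimal sketch with illustrative values (not the thesis data):

```python
# Ripening index = soluble solids content (SSC, degrees Brix) divided by
# titratable acidity (TA, %). Input values below are illustrative only.
def maturation_index(ssc_brix: float, ta_percent: float) -> float:
    return ssc_brix / ta_percent

print(maturation_index(12.0, 0.8))  # 15.0 -> higher values indicate riper fruit
```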
Abstract:
Protein aggregation and the formation of insoluble aggregates in the central nervous system is the main cause of neurodegenerative disease. Parkinson’s disease is associated with the appearance of spherical masses of aggregated proteins inside nerve cells, called Lewy bodies. α-Synuclein is the main component of Lewy bodies. In addition to α-synuclein, more than a hundred other proteins are co-localized in Lewy bodies: the 14-3-3η protein is one of them. In order to increase our understanding of the aggregation mechanism of α-synuclein and to study the effect of 14-3-3η on it, I addressed the following questions. (i) How do α-synuclein monomers pack against each other during aggregation? (ii) What is the role of 14-3-3η in α-synuclein packing during its aggregation? (iii) What is the role of 14-3-3η in aggregation of α-synuclein “seeded” by fragments of its fibrils? In order to answer these questions, I used different biophysical techniques, e.g. atomic force microscopy (AFM), nuclear magnetic resonance (NMR), surface plasmon resonance (SPR) and fluorescence spectroscopy (FS).
Abstract:
The aim of this work is to exploit the possibilities of spray drying for the generation of inhalation powders for the therapy of lung diseases. The focus is on producing physically stable and easily dispersible particles. Based on physico-chemical investigations (glass transition temperature, fragility, relaxation behaviour, hygroscopicity) of different amorphous excipients (lactose, raffinose, dextrans, cyclodextrins), hydroxypropyl-β-cyclodextrin shows the greatest potential for stabilizing an active ingredient within an amorphous matrix. Spray-dried particles exhibit more favourable dispersion and deposition properties than jet-milled particles. This is primarily attributable to the larger contact areas between jet-milled particles; spherical spray-dried particles, by contrast, have lower adhesion forces owing to point-like contact. Experiments with particle surfaces of varying degrees of corrugation indicate lower adhesion forces when the particles touch at points of smaller radii of curvature. Dispersion experiments in a defined pipe flow (deagglomerator) suggest a cascade-like agglomerate breakup. By spray-embedding different model drugs (salbutamol sulfate, ipratropium bromide, budesonide) in hydroxypropyl-β-cyclodextrin, both single formulations and a combination formulation containing all three drugs could be produced. At drug contents of up to 14%, these show no or only minor changes in the Fine Particle Dose (FPD) even after four weeks of open storage at 40°C and 75% relative humidity. The Fine Particle Fraction (FPF) of these formulations lies in the range of 40% to 75%. In combination with suitable packaging or a desiccant, a physical stability can be expected that allows a reasonable shelf life for an inhalation powder. Formulations with higher drug concentrations, by contrast, show stronger changes after stress storage. As an example of a crystalline spray-dried formulation, a powder consisting of mannitol and budesonide was produced.
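The Fine Particle Fraction quoted above relates the Fine Particle Dose to the delivered dose; a minimal sketch with illustrative numbers (not measured values from this work):

```python
# FPF (%) = Fine Particle Dose / delivered dose * 100.
# Input masses below are illustrative only, not data from this thesis.
def fine_particle_fraction(fpd_ug: float, delivered_dose_ug: float) -> float:
    return 100.0 * fpd_ug / delivered_dose_ug

print(fine_particle_fraction(60.0, 100.0))  # 60.0 -> within the reported 40-75% range
```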
Abstract:
The specific energy of lithium-ion batteries (LIBs) is today 200 Wh/kg, a value not sufficient to power fully electric vehicles with a driving range of 400 km, which requires a battery pack of 90 kWh. To deliver such energy the battery weight would have to be higher than 400 kg, and the corresponding increase in vehicle mass would narrow the driving range to 280 km. Two main strategies are pursued to improve the energy of rechargeable lithium batteries up to the transportation targets. The first is increasing the LIB working voltage by using high-voltage cathode materials. The second is increasing battery capacity by developing a cell chemistry where the oxygen redox reaction (ORR) occurs at the cathode and lithium metal is the anode (Li/O2 battery). This PhD work focuses on the development of high-voltage safe cathodes for LIBs and on the investigation of the feasibility of a Li/O2 battery operating with ionic liquid (IL)-based electrolytes. The use of LiMn1-xFexPO4 as a high-voltage cathode material is discussed. The synthesis and electrochemical tests of three different phosphates, safer cathode materials than transition metal oxides, are reported. The feasibility of a Li/O2 battery operating in IL-based electrolytes is also discussed. Three aspects have been investigated: basic aspects of the ORR, synthesis and characterization of porous carbons as positive electrode materials, and the study of factors limiting electrode capacity and cycle life. Regarding LIBs, the findings on LiMnPO4 prepared from soluble precursors demonstrate that a well-performing Mn-based olivine is viable without the coexistence of iron. Regarding the Li/O2 battery, the oxygen diffusion coefficient and concentration values in different ILs were obtained. This work highlighted that O2 mass transport limits the Li/O2 capacity at high currents; it gave indications on how to increase battery capacity by using a flow cell and a porous carbon cathode.
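The mass figures quoted above follow directly from the specific energy; a quick back-of-the-envelope check:

```python
# Pack mass = pack energy / specific energy, using the abstract's figures.
specific_energy_wh_per_kg = 200    # today's LIB specific energy
pack_energy_wh = 90_000            # 90 kWh pack for a 400 km driving range

pack_mass_kg = pack_energy_wh / specific_energy_wh_per_kg
print(pack_mass_kg)  # 450.0 -> consistent with "higher than 400 kg"
```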
Abstract:
This PhD thesis includes five main parts on diverse topics. The first two parts deal with the trophic ecology of wolves in Italy following a recent increase in wild ungulate abundance. Data on wolf diet across time highlighted how wild ungulates are an important food resource for wolves in Italy. An increasing wolf population, increasing numbers of wild ungulates and decreasing livestock consumption should mitigate wolf-human conflicts in Italy in the near future. In the third part, non-invasive genetic sampling techniques were used to obtain genotypes and genders of about 400 wolves. Wolf packs were then genetically reconstructed using diverse population genetic and parentage software. By combining the results on pack structure and genetic relatedness with sampling locations, the home ranges of wolf packs and dispersal patterns were identified. These results, particularly important for the conservation management of wolves in Italy, illustrate the detailed information that can be retrieved from the genetic identification of individuals. In the fourth part, wolf locations were combined with environmental information obtained as GIS layers. Modern species distribution models (niche models) were applied to infer potential wolf distribution and predation risk. From the resulting distribution maps, information on the pastures with the highest risk of depredation was derived. This is particularly relevant as it allows identifying those areas under danger of carnivore attack on livestock. Finally, in the fifth part, habitat suitability models were combined with landscape genetic analysis. On one side, landscape genetic analyses of the Italian wolves provided new information on the dynamics and connectivity of the population; on the other side, a thorough analysis of the effects that habitat suitability methods have on the parameterization of landscape genetic analyses was carried out, contributing significantly to landscape genetic theory.
Abstract:
Global climate change in recent decades has strongly influenced the Arctic, generating pronounced warming accompanied by a significant reduction of sea ice in seasonally ice-covered seas and a dramatic increase of open water regions exposed to wind [Stephenson et al., 2011]. By strongly scattering the wave energy, thick multiyear ice prevents swell from penetrating deeply into the Arctic pack ice. However, with the recent changes affecting Arctic sea ice, waves gain more energy from the extended fetch and can therefore penetrate further into the pack ice. Arctic sea ice also appears weaker during the melt season, extending the transition zone between thick multi-year ice and the open ocean. This region is called the Marginal Ice Zone (MIZ). In the Arctic, the MIZ is mainly encountered in the marginal seas, such as the Nordic Seas, the Barents Sea, the Beaufort Sea and the Labrador Sea. Formed by numerous blocks of sea ice of various diameters (floes), the MIZ, under certain conditions, allows maritime transportation, stimulating dreams of industrial and touristic exploitation of these regions and possibly allowing, in the near future, a maritime connection between the Atlantic and the Pacific. With the increasing human presence in the Arctic, waves pose security and safety issues. As marginal seas are targeted for oil and gas exploitation, understanding and predicting ocean waves and their effects on sea ice become crucial for structure design and for the real-time safety of operations. The juxtaposition of waves and sea ice represents a risk for personnel and equipment deployed on ice, and may complicate critical operations such as platform evacuations. The risk is difficult to evaluate because there are no long-term observations of waves in ice, swell events are difficult to predict from local conditions, ice breakup can occur on very short time-scales and wave-ice interactions are beyond the scope of current forecasting models [Liu and Mollo-Christensen, 1988; Marko, 2003]. 
In this thesis, the newly developed Waves in Ice Model (WIM) [Williams et al., 2013a; Williams et al., 2013b] and its related Ocean and Sea Ice Model (OSIM) will be used to study the MIZ and to improve wave modeling in ice-infested waters. The following work has been conducted in collaboration with the Nansen Environmental and Remote Sensing Center and within the SWARP project, which aims to extend operational services supporting human activity in the Arctic by including forecasts of waves in ice-covered seas, forecasts of sea ice in the presence of waves, and remote sensing of both wave and sea ice conditions. The WIM will be included in the downstream forecasting services provided by the Copernicus marine environment monitoring service.
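The scattering argument above is commonly captured by an exponential decay of wave amplitude with distance into the ice, a(x) = a0 * exp(-alpha * x); a hedged sketch with hypothetical attenuation coefficients (not the WIM's actual parameterization):

```python
import math

# Hedged illustration (not from the WIM itself): swell entering pack ice is
# often modelled as decaying exponentially with penetration distance, so a
# larger attenuation coefficient alpha confines wave energy near the ice edge.
def swell_amplitude(a0_m: float, alpha_per_km: float, distance_km: float) -> float:
    return a0_m * math.exp(-alpha_per_km * distance_km)

# Thick multiyear ice (large alpha) vs. a weaker MIZ cover (small alpha);
# both alpha values are hypothetical, chosen only to show the contrast.
print(round(swell_amplitude(2.0, 0.10, 30), 3))  # strong scattering: swell nearly gone
print(round(swell_amplitude(2.0, 0.01, 30), 3))  # weak scattering: deep penetration
```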
Abstract:
It is barely 15 years since, in 1996, the theme issue of Schizophrenia Bulletin (Vol 22, No. 2), “Early Detection and Intervention in Schizophrenia”, signified the commencement of this field of research. Since that time the field of early detection research has developed rapidly, and it may be translated into clinical practice through the introduction of an Attenuated Psychosis Syndrome in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) (www.dsm5.org/ProposedRevisions/Pages/proposedrevision.aspx?rid=412#). Attenuated psychotic symptoms (APS) had first been suggested as a clinical predictor of first-episode psychosis by the Personal Assessment and Crisis Evaluation (PACE) Clinic group as part of the ultrahigh risk (UHR) criteria.1 The term ultrahigh risk became broadly accepted for this set of criteria for imminent risk of developing psychosis in the late 1990s. The use of the term “prodrome” for a state characterized by at-risk (AR) criteria was criticized as a retrospective concept inevitably followed by the full-blown disorder.1 Although alternative terms have been suggested, prodrome is still used in prospective studies (eg, prodromally symptomatic, potentially or putatively prodromal, prodrome-like state/symptoms). Some alternative suggestions such as prepsychotic state/symptoms, subthreshold psychotic symptoms, early psychosis, subsyndromal psychosis, hypopsychosis, or subpsychosis were short-lived. Other terms still in use include UHR, at-risk mental state (ARMS), AR, high risk, clinical high risk (CHR), or early and late AR state. Further, the term psychotic-like experiences (PLEs) has recently (re-)entered early detection research. …
Abstract:
The present study validated the accuracy of data from a self-reported questionnaire on smoking behaviour against exhaled carbon monoxide (CO) level measurements in two groups of patients. Group 1 included patients referred to an oral medicine unit, whereas group 2 was recruited from the daily outpatient service. All patients filled in a standardized questionnaire regarding their current and former smoking habits. Additionally, exhaled CO levels were measured using a monitor. A total of 121 patients were included in group 1, and 116 patients were included in group 2. The mean value of exhaled CO was 7.6 ppm in the first group and 9.2 ppm in the second group. The mean CO values did not differ significantly between the two groups. The two exhaled CO level measurements taken for each patient exhibited very good correlation (Spearman's coefficient of 0.9857). Adjusted for group, smokers' exhaled CO values were on average 13.95 ppm higher than those of non-smokers (p < 0.001). The consumption of one additional pack year resulted in an increase in CO values of 0.16 ppm (p = 0.003). The consumption of one additional cigarette per day elevated the CO measurements by 0.88 ppm (p < 0.001). Based on these results, the correlations between self-reported smoking habits and exhaled CO values are robust and highly reproducible. CO monitors may offer a non-invasive method to objectively assess current smoking behaviour and to monitor tobacco use cessation attempts in the dental setting.
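The reported effect sizes can be combined into an illustrative linear predictor for exhaled CO. This is a sketch only: the smoker/non-smoker difference (13.95 ppm), the pack-year coefficient (0.16 ppm) and the cigarettes-per-day coefficient (0.88 ppm) come from the abstract, but their additive combination in one equation and the non-smoker baseline value used here are assumptions for illustration, not the authors' fitted model.

```python
def predicted_co(smoker: bool, pack_years: float, cigs_per_day: float,
                 baseline: float = 2.0) -> float:
    """Illustrative exhaled-CO prediction (ppm) from the reported coefficients.

    `baseline` (non-smoker intercept) is an assumed value; the abstract does
    not report an intercept. Coefficients: +13.95 ppm for current smoking,
    +0.16 ppm per pack year, +0.88 ppm per cigarette smoked per day.
    """
    co = baseline
    if smoker:
        co += 13.95
    co += 0.16 * pack_years + 0.88 * cigs_per_day
    return co
```

For example, a current smoker with 10 pack years who smokes 20 cigarettes a day would be predicted at roughly 35 ppm under these assumptions.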
Abstract:
OBJECTIVE: Schizotypal features indicate proneness to psychosis in the general population. It is also possible that they increase transition to psychosis (TTP) among clinical high-risk (CHR) patients. Our aim was to investigate whether schizotypal features predict TTP in CHR patients. METHODS: In the EPOS (European Prediction of Psychosis Study) project, 245 young help-seeking CHR patients were prospectively followed for 18 months and their transitions to psychosis were identified. At baseline, subjects were assessed with the Schizotypal Personality Questionnaire (SPQ). Associations of SPQ items and subscales with TTP were analysed using Cox regression. RESULTS: The SPQ subscales and items describing ideas of reference and lack of close interpersonal relationships were significantly associated with TTP. The co-occurrence of these features doubled the risk of TTP. CONCLUSIONS: Presence of ideas of reference and lack of close interpersonal relations increases the risk of full-blown psychosis among CHR patients. This co-occurrence makes the risk of psychosis very high.
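The "doubled risk" for co-occurring features is consistent with how covariate effects combine in a Cox proportional-hazards model: effects add on the log-hazard scale, so hazard ratios multiply. A minimal sketch of that arithmetic follows; the individual hazard-ratio values in the usage note are hypothetical, not the study's estimates.

```python
import math

def combined_hazard_ratio(*hazard_ratios: float) -> float:
    """Combine independent covariate effects under proportional hazards.

    In a Cox model the linear predictor adds on the log-hazard scale,
    so the joint hazard ratio is the product of the individual ones.
    """
    return math.exp(sum(math.log(hr) for hr in hazard_ratios))
```

If, hypothetically, each of the two features carried an individual hazard ratio of about 1.41, their co-occurrence would yield a combined hazard ratio of about 2, matching the reported doubling.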
Abstract:
BACKGROUND: Opportunistic screening for genital chlamydia infection is being introduced in England, but evidence for the effectiveness of this approach is lacking. There are insufficient data about young people's use of primary care services to determine the potential coverage of opportunistic screening in comparison with a systematic population-based approach. AIM: To estimate use of primary care services by young men and women; to compare potential coverage of opportunistic chlamydia screening with a systematic postal approach. DESIGN OF STUDY: Population-based cross-sectional study. SETTING: Twenty-seven general practices around Bristol and Birmingham. METHOD: A random sample of patients aged 16-24 years was posted a chlamydia screening pack. We collected details of face-to-face consultations from general practice records. Survival and person-time methods were used to estimate the cumulative probability of attending general practice in 1 year and the coverage achieved by opportunistic and systematic postal chlamydia screening. RESULTS: Of 12 973 eligible patients, an estimated 60.4% (95% confidence interval [CI] = 58.3 to 62.5%) of men and 75.3% (73.7 to 76.9%) of women aged 16-24 years attended their practice at least once in a 1-year period. During this period, an estimated 21.3% of patients would not attend their general practice but would be reached by postal screening, 9.2% would not receive a postal invitation but would attend their practice, and 11.8% would be missed by both methods. CONCLUSIONS: Opportunistic and population-based approaches to chlamydia screening would both fail to contact a substantial minority of the target group, if used alone. A pragmatic approach combining both strategies might achieve higher coverage.
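The reported percentages partition the target group four ways (reached by both methods, by one only, or by neither), so the coverage of each strategy can be recovered by simple arithmetic. A sketch using the abstract's figures (21.3% reached only by post, 9.2% only by attending the practice, 11.8% missed by both):

```python
def screening_coverage(only_postal: float = 21.3,
                       only_opportunistic: float = 9.2,
                       missed_by_both: float = 11.8) -> dict:
    """Recover per-strategy coverage (percent) from the four-way partition."""
    reached_by_both = 100.0 - only_postal - only_opportunistic - missed_by_both
    return {
        "postal": round(reached_by_both + only_postal, 1),
        "opportunistic": round(reached_by_both + only_opportunistic, 1),
        "combined": round(100.0 - missed_by_both, 1),
    }
```

With the reported figures this gives roughly 79.0% coverage for postal screening alone, 66.9% for opportunistic screening alone, and 88.2% for the combined strategy, consistent with the conclusion that each approach alone misses a substantial minority.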
Abstract:
We propose robust and efficient tests and estimators for gene-environment/gene-drug interactions in family-based association studies. The methodology is designed for studies in which haplotypes, quantitative phenotypes and complex exposure/treatment variables are analyzed. Using causal inference methodology, we derive family-based association tests and estimators for the genetic main effects and the interactions. The tests and estimators are robust against population admixture and stratification without requiring adjustment for confounding variables. We illustrate the practical relevance of our approach by an application to a COPD study. The data analysis suggests a gene-environment interaction between a SNP in the Serpine gene and smoking status/pack years of smoking that reduces the FEV1 volume by about 0.02 liter per pack year of smoking. Simulation studies show that the proposed methodology is sufficiently powered for realistic sample sizes and that it provides valid tests and effect size estimators in the presence of admixture and stratification.
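The reported interaction effect translates directly into an expected FEV1 change for a given smoking exposure. A minimal sketch, assuming the effect extrapolates linearly over pack years (the abstract reports only the per-pack-year estimate, so linearity is an assumption for illustration):

```python
def fev1_interaction_effect(pack_years: float, beta: float = -0.02) -> float:
    """Expected FEV1 change (litres) attributable to the SNP-by-smoking
    interaction, using the reported estimate of about -0.02 L per pack year."""
    return beta * pack_years
```

Under these assumptions, a carrier with 20 pack years of smoking would be expected to lose about 0.4 L of FEV1 through this interaction.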