Abstract:
Phenol and cresols are a good example of primary chemical building blocks, of which 2.8 million tons are currently produced in Europe each year. At present, these primary phenolic building blocks are obtained by refining fossil hydrocarbons: 5% of world production comes from coal (which contains 0.2% phenols) through distillation of the tar residue left after coke production, while 95% of current world phenol production comes from the distillation and cracking of crude oil. In nature, phenolic compounds are present in terrestrial higher plants and ferns in several different chemical structures, while they are essentially absent in lower organisms and in animals. Biomass (which contains 3-8% phenols) represents a substantial, presently underexploited source of secondary chemical building blocks. These phenolic derivatives are currently used, in the tens of thousands of tons, to produce high-value products such as food additives and flavours (e.g. vanillin), fine chemicals (e.g. non-steroidal anti-inflammatory drugs such as ibuprofen or flurbiprofen) and polymers (e.g. poly p-vinylphenol, a photosensitive polymer for electronic and optoelectronic applications). European agrifood waste is a low-cost, abundant raw material (250 million tons per year) which does not divert land use or processing resources from necessary sustainable food production. The class of phenolic compounds essentially comprises simple phenols, phenolic acids, hydroxycinnamic acid derivatives, flavonoids and lignans. As in the case of coke production, removing the phenolic content from biomass also upgrades the residual biomass. Focusing on the phenolic component of agrifood wastes opens huge processing and marketing opportunities, since phenols are used as chemical intermediates for a large number of applications, ranging from pharmaceuticals and agricultural chemicals to food ingredients. Following this approach, we developed a biorefining process to recover the phenolic fraction of wheat bran, based on commercial enzymatic biocatalysts in a completely water-based process and on polymeric resins, with the aim of substituting secondary chemical building blocks with the same compounds naturally present in biomass. We characterized several industrial enzymatic products for their ability to hydrolyse the different molecular features present in wheat bran cell wall structures, focusing on the hydrolysis of polysaccharide chains and phenolic cross-links. These industrial biocatalysts were tested on wheat bran, and the optimized process liquefied up to 60% of the treated matter. The enzymatic treatment also solubilized up to 30% of the alkali-extractable ferulic acid. An extraction process for the phenolic fraction of the hydrolysed wheat bran was developed, based on adsorption/desorption on the styrene-divinylbenzene weak anion-exchange resin Amberlite IRA 95. The efficiency of the resin was tested on different model systems containing ferulic acid, and the adsorption and desorption operating parameters were optimized for the crude enzymatic wheat bran hydrolysate. The extraction process had an overall yield of 82% and gave concentrated extracts containing up to 3000 ppm of ferulic acid. The crude enzymatic wheat bran hydrolysate and the concentrated extract were finally used as substrates in the bioconversion of ferulic acid into vanillin by resting-cell fermentation.
The bioconversion gave vanillin yields of 60-70% within 5-6 hours of fermentation. Our findings are a first step towards demonstrating the economic feasibility of recovering biophenols from agrifood wastes through a whole-crop approach in a sustainable biorefining process.
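As a rough worked example of how the stage yields quoted above combine (a sketch only: the 65% bioconversion figure is the midpoint of the reported 60-70% range, and the ferulic acid content per kg of bran is an assumed illustrative number, not a figure from the study):

```python
# Back-of-the-envelope chain yield for the process described above.
enzymatic_release = 0.30      # fraction of alkali-extractable ferulic acid solubilized
resin_recovery    = 0.82      # overall yield of the adsorption/desorption step
bioconversion     = 0.65      # molar yield of vanillin from ferulic acid (midpoint of 60-70%)

MW_FERULIC = 194.18           # g/mol
MW_VANILLIN = 152.15          # g/mol

overall_molar_yield = enzymatic_release * resin_recovery * bioconversion
print(f"Overall molar yield (vanillin per alkali-extractable ferulic acid): "
      f"{overall_molar_yield:.1%}")

# Hypothetical illustration: assume 5 g of alkali-extractable ferulic acid
# per kg of wheat bran (an assumption, not stated in the abstract).
ferulic_per_kg_bran = 5.0     # g/kg, assumed
vanillin_per_kg_bran = (ferulic_per_kg_bran * overall_molar_yield
                        * MW_VANILLIN / MW_FERULIC)
print(f"Vanillin per kg bran under these assumptions: {vanillin_per_kg_bran:.2f} g")
```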
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetime compared to desktop and laptop systems.
• On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. While Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and in industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.
Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs)
Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time, and, even more importantly, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on the processors of a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. This is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, the optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristics or, more generally, with incomplete search is that they introduce an optimality gap of unknown size: they provide very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps, formulating accurate models which account for a number of "non-idealities" of real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructure required by developers to deploy applications on the target MPSoC platforms.
Energy-Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor
Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, gaming and navigation devices. There is a clear trend towards increasing LCD size to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel matrix driving circuits and is typically proportional to the panel area. As a result, the LCD contribution to total power is likely to remain considerable in future mobile appliances. To address this issue, companies are proposing low-power display technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others decrease the backlight level while compensating the resulting luminance reduction, limiting the perceived quality degradation through pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS.
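To make the luminance-compensation principle concrete, the following is a minimal sketch (illustrative only, not the hardware-assisted pipeline developed in this dissertation) of how pixel values can be scaled to offset a dimmed backlight:

```python
import numpy as np

def compensate_for_dimming(frame: np.ndarray, backlight_level: float) -> np.ndarray:
    """Scale pixel values to offset a reduced backlight level.

    frame           -- 8-bit grayscale or RGB image as a NumPy array
    backlight_level -- new backlight setting in (0, 1], where 1.0 is full brightness

    Perceived luminance is roughly proportional to pixel value times backlight
    level, so dividing pixel values by the dimming factor preserves apparent
    brightness, except where values saturate at 255 (where quality loss occurs).
    """
    compensated = frame.astype(np.float32) / backlight_level
    return np.clip(compensated, 0, 255).astype(np.uint8)

# Example: dim the backlight to 60% and compensate the frame accordingly.
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame
display_frame = compensate_for_dimming(frame, backlight_level=0.6)
```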
The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications.
Thesis Overview
The remainder of the thesis is organized as follows. The first part focuses on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined, stream-oriented applications on top of distributed-memory architectures with messaging support. We tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while Chapter 6 takes applications with conditional task graphs into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
Abstract:
This report presents an innovative satellite-based monitoring approach, applied to the Iraqi Marshlands, to survey the extent and distribution of marshland re-flooding and to assess the development of wetland vegetation cover. The study, conducted in collaboration with MEEO Srl, uses images collected by the (A)ATSR sensor onboard the ESA ENVISAT satellite to gather data at multiple temporal scales, and a multi-temporal analysis was adopted to observe the evolution of marshland re-flooding. The methodology uses a multi-temporal, pixel-based approach built on classification maps produced by the classification tool SOIL MAPPER®. The catalogue of classification maps is available as a web service through the Service Support Environment Portal (SSE, supported by ESA). The inundation of the Iraqi marshlands, which has been continuous since April 2003, is characterized by a high degree of variability, ad-hoc interventions and uncertainty. Given the security constraints and the vastness of the Iraqi marshlands, as well as cost-effectiveness considerations, satellite remote sensing was the only viable tool to observe the changes taking place on a continuous basis. The proposed system (ALCS: AATSR Land Classification System) avoids the direct use of the (A)ATSR images and foresees the application of LULCC evolution models directly to the 'stock' of classified maps. This approach is made possible by the availability of a 13-year classified image database, conceived and implemented in the CARD project (http://earth.esa.int/rtd/Projects/#CARD). The approach presented here evolves toward an innovative, efficient and fast method to exploit the potential of multi-temporal LULCC analysis of (A)ATSR images. The two main objectives of this work are both forms of assessment: the first is to assess the modelling capability of the ALCS web application using AATSR images classified with SOIL MAPPER®, and the second is to evaluate the magnitude, character and extent of wetland rehabilitation.
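As an illustration of the pixel-based, multi-temporal comparison such a system performs on its stock of classified maps (a minimal sketch: the class codes, map arrays and pixel size are hypothetical, not the SOIL MAPPER legend or the ALCS implementation):

```python
import numpy as np

# Two classified maps of the same area at different dates; integer class codes
# are hypothetical (0 = bare/dry land, 1 = water, 2 = wetland vegetation).
map_2003 = np.random.randint(0, 3, (100, 100))
map_2005 = np.random.randint(0, 3, (100, 100))

DRY, WATER, VEGETATION = 0, 1, 2

# Per-pixel transitions of interest for a re-flooding assessment.
newly_flooded = np.logical_and(map_2003 == DRY, map_2005 == WATER)
revegetated   = np.logical_and(map_2003 != VEGETATION, map_2005 == VEGETATION)

pixel_area_km2 = 1.0  # assumed nominal (A)ATSR pixel footprint of ~1 km x 1 km
print("Newly flooded area:", newly_flooded.sum() * pixel_area_km2, "km^2")
print("New wetland vegetation:", revegetated.sum() * pixel_area_km2, "km^2")
```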
Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size, which reduces the area, power and cost of integrated circuits while increasing their computational performance. This trend, known as technology scaling, is approaching the nanometre scale. The lithographic process in the manufacturing stage becomes increasingly uncertain as transistor size scales down, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage is limiting the scaling of threshold and supply voltages, increasing power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects can no longer be addressed at the process level alone. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches called Design for Manufacturability (DFM) and Design for Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and systems able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) new analysis algorithms able to predict the system's thermal behaviour and its impact on power and speed performance; ii) high-level wear-out models able to predict the system's mean time to failure (MTTF); iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with future challenges, for instance: i) thermal management strategies that increase device reliability and lifetime by acting on tunable parameters such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library that has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results show that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates integrating thermal analysis into the first design stages of embedded NoC design.
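The kind of first-order estimate such a thermal analysis library provides can be sketched with a lumped RC thermal model (a deliberate simplification for illustration; the actual library models the full layout, and all parameter values below are assumed):

```python
import math

def tile_temperature(power_w, r_th_k_per_w, c_th_j_per_k, t_ambient_c, t_seconds):
    """Temperature of a single NoC tile under constant power dissipation,
    using a first-order lumped RC thermal model:
        T(t) = T_amb + P * R_th * (1 - exp(-t / (R_th * C_th)))
    """
    tau = r_th_k_per_w * c_th_j_per_k
    return t_ambient_c + power_w * r_th_k_per_w * (1.0 - math.exp(-t_seconds / tau))

# Hypothetical parameters for one tile (illustrative values only).
for t in (0.001, 0.01, 0.1, 1.0):
    temp = tile_temperature(power_w=0.8, r_th_k_per_w=40.0,
                            c_th_j_per_k=0.005, t_ambient_c=45.0, t_seconds=t)
    print(f"t = {t:5.3f} s -> T = {temp:5.1f} C")
```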
Later, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyse the impact of self-timed asynchronous logic stages in an embedded microprocessor. The results confirmed the capability of self-timed logic to increase manufacturability and reliability. We also used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we found that low-swing links have superior robustness to systematic process variation and respond well to compensation techniques such as ASV and ABB. Hence low-swing signalling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work demonstrates the advantage of integrating a statistical process variation analysis tool into the first stages of the design flow.
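A minimal Monte Carlo sketch of the kind of statistical timing question such a tool answers (distributions, sigmas and the clock period are assumed for illustration, not values from the thesis):

```python
import random

def sampled_path_delay(nominal_ns=1.0, systematic_sigma=0.05, random_sigma=0.03,
                       n_stages=8):
    """Sample the delay of one logic path: a die-level systematic component
    shared by all stages plus an independent random component per stage."""
    systematic = random.gauss(0.0, systematic_sigma)   # same shift for the whole die
    delay = 0.0
    for _ in range(n_stages):
        stage = nominal_ns / n_stages
        delay += stage * (1.0 + systematic + random.gauss(0.0, random_sigma))
    return delay

random.seed(0)
clock_period_ns = 1.08
samples = [sampled_path_delay() for _ in range(10_000)]
timing_yield = sum(d <= clock_period_ns for d in samples) / len(samples)
print(f"Estimated timing yield at {clock_period_ns} ns: {timing_yield:.1%}")
```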
Abstract:
Membrane-based separation processes have been acquiring increasing importance in recent years because of their intrinsic energy and environmental sustainability: some types of polymeric materials, showing adequate perm-selectivity, appear well suited for these applications because of their relatively low cost and easy processability. In this work, two different types of polymeric membranes were studied in view of possible applications to gas separation processes: Mixed Matrix Membranes (MMMs) and high free volume glassy polymers. Since the early 1990s, it has been understood that the performance of polymeric materials in gas separation shows an upper bound in terms of permeability and selectivity: an increase of permeability is often accompanied by a decrease of selectivity and vice versa, while several inorganic materials, such as zeolites or silica derivatives, can overcome this limitation. As a consequence, the idea of dispersing inorganic particles in polymeric matrices was developed, in order to obtain membranes with improved perm-selectivity. In particular, dispersing fumed silica nanoparticles in high free volume glassy polymers improves gas and vapour permeability in all cases, while selectivity may either increase or decrease depending on the material and gas mixture: this effect is due to the capacity of the nanoparticles to disrupt the local chain packing, increasing the size of the excess free volume elements trapped in the polymer matrix. In this work, different kinds of MMMs were fabricated using amorphous Teflon® AF or PTMSP and fumed silica: in all cases, a considerable increase of solubility, diffusivity and permeability of gases and vapours (n-alkanes, CO2, methanol) was observed, while selectivity shows a non-monotonic trend with filler fraction. Moreover, the classical models for composites are not able to capture the increase of transport properties due to the silica addition, so it was necessary to develop and validate an appropriate thermodynamic model that correctly predicts the mass transport features of MMMs. Another material, poly(trimethylsilyl norbornene) (PTMSN), was also examined: it is a new-generation high free volume glassy polymer that, like PTMSP, shows unusually high permeability and selectivity towards the more condensable vapours. The two polymers differ in that PTMSN shows more pronounced chemical stability, due to its double-bond-free structure. For this polymer, a set of Lattice Fluid parameters was estimated, making possible a comparison between experimental and theoretical solubility isotherms for hydrocarbon and alcohol vapours: the successful modelling, based on the NELF model, offers a reliable alternative to direct sorption measurements, which are extremely time-consuming due to the significant relaxation phenomena shown at each sorption step. Dilation experiments were also performed on this material, in order to quantify its dimensional stability in the presence of large, swelling vapours.
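As a concrete reference for what "classical models for composites" predict, here is a sketch of the Maxwell model for the effective permeability of a polymer with dispersed filler (for illustration only; whether it applies quantitatively to these MMMs is exactly what the abstract calls into question, and the numerical values are arbitrary):

```python
def maxwell_permeability(p_matrix, p_filler, phi):
    """Classical Maxwell model for the effective permeability of a composite
    with volume fraction phi of dispersed spherical filler particles:
        P_eff = P_m * [P_f + 2*P_m - 2*phi*(P_m - P_f)]
                    / [P_f + 2*P_m +   phi*(P_m - P_f)]
    For an impermeable filler (P_f = 0) this always predicts a permeability
    decrease, which is why such models cannot reproduce the enhancement
    observed when fumed silica is added to high free volume glassy polymers.
    """
    num = p_filler + 2 * p_matrix - 2 * phi * (p_matrix - p_filler)
    den = p_filler + 2 * p_matrix + phi * (p_matrix - p_filler)
    return p_matrix * num / den

# Impermeable filler at 20 vol%: Maxwell predicts roughly 27% lower permeability.
print(maxwell_permeability(p_matrix=1.0, p_filler=0.0, phi=0.20))
```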
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments, improving the quality of life of the occupants and enhancing the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is embedded and deployed in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interaction between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enable them to reduce interference with the sensed physical phenomena and to allow easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. Video sensors are a particular case of sensor nodes. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging plays, and will increasingly play, an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis: more eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, these constraints should almost always be considered:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of a WSN can be very large, since the network presents a high degree of parallelism that can be exploited through ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance.
Low-Power Video Sensor Nodes and Video Processing Algorithms
In comparison to scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a tight power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the first is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes are presented which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information and interpreting it locally.
Featuring such intelligence, these nodes would be able to cope with tasks such as recognizing unattended bags in airports or persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data.
Multimodal Surveillance
In several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can lead to continuous operation of the node. Being low-cost, passive (thus low-power) and small, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve system performance, outperforming naive power management policies.
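A minimal sketch of the kind of energy-aware, PIR-triggered policy described above (thresholds and energy costs are hypothetical placeholders, and the actual controller is MPC-based rather than this simple rule):

```python
from dataclasses import dataclass

@dataclass
class NodeState:
    battery_mj: float        # remaining energy budget in millijoules
    harvest_mj_per_s: float  # average solar harvesting rate

CAPTURE_COST_MJ = 900.0      # energy to wake the camera, grab and transmit a frame (assumed)
RESERVE_MJ = 5_000.0         # never let the battery fall below this reserve (assumed)

def on_pir_event(state: NodeState, idle_seconds_since_last_capture: float) -> bool:
    """Decide whether a PIR motion event should wake the camera.

    The camera is triggered only if the expected post-capture energy,
    including what was harvested while idle, stays above the reserve.
    """
    expected_energy = (state.battery_mj
                       + state.harvest_mj_per_s * idle_seconds_since_last_capture
                       - CAPTURE_COST_MJ)
    return expected_energy > RESERVE_MJ

state = NodeState(battery_mj=20_000.0, harvest_mj_per_s=5.0)
print(on_pir_event(state, idle_seconds_since_last_capture=120.0))  # True
```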
Abstract:
The growing traffic volumes on road pavements cause stress states of considerable magnitude that produce permanent damage to the superstructure. Such damage reduces its service life and leads to high maintenance costs. Bituminous mixture is a multiphase material composed of aggregates, bitumen and air voids. The physical properties and performance of the mixture depend on the characteristics of the aggregate and of the binder, and on their interaction. The approach traditionally used for the numerical modelling of bituminous mixtures is based on a macroscopic study of their mechanical response through continuum constitutive models which, by their nature, do not account for the mutual interaction between the heterogeneous phases and rely on equivalent homogeneous schematizations. To advance these methodologies, it is necessary to go beyond this simplification, considering the discrete character of the system and adopting a microscopic approach able to represent the real physical-mechanical processes on which the overall macroscopic response depends. In the present work, after a general review of the main numerical methods traditionally used to study bituminous mixtures, the Particle Discrete Element Method (DEM-P) is examined in depth; it schematizes the granular material as a set of independent particles that interact with one another at their mutual contact points according to appropriate constitutive laws. The influence of aggregate shape and size on the macroscopic characteristics (maximum deviatoric stress) and microscopic characteristics (normal and tangential contact forces, number of contacts, void ratio, porosity, packing, internal friction angle) of the mixture is evaluated. This is made possible by comparing numerical and experimental results of triaxial tests performed on specimens made of three different mixtures composed of spheres and elements of generic shape.
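A minimal sketch of the particle-contact law at the heart of a DEM-P model (a simple linear normal-contact spring between two spheres; the stiffness and geometry below are illustrative, not the calibrated values used in the study):

```python
import math

def normal_contact_force(center_a, center_b, radius_a, radius_b, k_n):
    """Linear elastic normal contact force between two spheres (DEM style).

    Returns the force vector acting on sphere A; zero if the spheres do not
    overlap. k_n is the normal contact stiffness (N/m).
    """
    dx = [b - a for a, b in zip(center_a, center_b)]
    dist = math.sqrt(sum(d * d for d in dx))
    overlap = radius_a + radius_b - dist
    if overlap <= 0.0 or dist == 0.0:
        return [0.0, 0.0, 0.0]
    n = [d / dist for d in dx]                 # unit vector from A to B
    return [-k_n * overlap * ni for ni in n]   # pushes A away from B

# Two 10 mm spheres overlapping by 0.5 mm, k_n = 1e6 N/m (illustrative values)
force = normal_contact_force((0, 0, 0), (0.0195, 0, 0), 0.01, 0.01, 1e6)
print(force)   # approximately [-500.0, 0.0, 0.0] N
```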
Abstract:
This doctoral thesis is devoted to the study of the causal effects of maternal smoking on delivery cost. The economic consequences of smoking in pregnancy have been studied fairly extensively in the USA, while very little is known in the European context. To identify the causal relation between different maternal smoking statuses and delivery cost in the Emilia-Romagna region, two distinct methods were used. The first - geometric multidimensional - is mainly based on a multivariate approach and involves computing and testing the global imbalance, classifying cases in order to generate well-matched comparison groups, and then computing treatment effects. The second - structural modelling - refers to a general methodological account of model building and model testing. The main idea of this approach is to decompose the global mechanism into sub-mechanisms through a recursive decomposition of a multivariate distribution.
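As a sketch of what "computing the global imbalance" between treated and comparison groups can look like in practice (a Mahalanobis-type distance between covariate means; the specific statistic, covariates and data used in the thesis are not detailed in the abstract, so everything below is illustrative):

```python
import numpy as np

def global_imbalance(x_treated: np.ndarray, x_control: np.ndarray) -> float:
    """Mahalanobis distance between the covariate mean vectors of two groups,
    using the pooled covariance matrix. Rows are cases, columns covariates."""
    diff = x_treated.mean(axis=0) - x_control.mean(axis=0)
    pooled_cov = (np.cov(x_treated, rowvar=False) + np.cov(x_control, rowvar=False)) / 2
    return float(np.sqrt(diff @ np.linalg.inv(pooled_cov) @ diff))

rng = np.random.default_rng(0)
smokers     = rng.normal([29.0, 24.5], [5.0, 4.0], size=(200, 2))  # e.g. age, BMI (simulated)
non_smokers = rng.normal([31.0, 23.5], [5.0, 4.0], size=(800, 2))
print(f"Global imbalance before matching: {global_imbalance(smokers, non_smokers):.2f}")
```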
Abstract:
This work presents a new cost model for extrusion dies based on a feature-based approach. In particular, the aim was to define the cost of these products on the basis of their geometric and technological characteristics.
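A minimal sketch of what a feature-based cost estimate can look like (the feature set, rates and coefficients are entirely hypothetical, since the abstract does not report the model's actual parameters):

```python
# Hypothetical feature-based cost estimate for an extrusion die:
# total cost = material cost + machining costs driven by geometric features.
def die_cost_estimate(features: dict, material_rate_eur_per_kg: float = 8.0,
                      machine_rate_eur_per_h: float = 60.0) -> float:
    machining_hours = (
        0.10 * features.get("n_bearing_surfaces", 0)      # hours per bearing surface
        + 0.05 * features.get("n_ports", 0)                # hours per port (hollow dies)
        + 0.002 * features.get("profile_perimeter_mm", 0)  # wire-EDM time vs. perimeter
    )
    return (features.get("blank_mass_kg", 0) * material_rate_eur_per_kg
            + machining_hours * machine_rate_eur_per_h)

example_die = {"blank_mass_kg": 12.0, "n_bearing_surfaces": 6,
               "n_ports": 4, "profile_perimeter_mm": 450.0}
print(f"Estimated die cost: {die_cost_estimate(example_die):.0f} EUR")
```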
Abstract:
In the first part of this thesis project, I analysed the relevant theoretical notions of transition theory. The first concept discussed is that of transition itself. In the final part of the chapter, the focus shifts to the role that niches play in a generic transition. The central instrument in this framework is the transition experiment, which provides an alternative to classical innovation projects focused on obtaining short-term solutions; there is therefore a strong relationship between niches and experimentation. Finally, the discussion concentrates on Strategic Niche Management. In the second chapter, I analyse the theme of sustainability in a university context. This section focuses on the high-level strategies required to start a university's transition towards sustainability, identifying obstacles and supporting elements and defining a vision in order to realize it. The chapter guides, step by step, universities that attempt to put their sustainable development goal and vision into practice. One of the main problems in assessing sustainability efforts in universities lies in the assessment tools; for this reason, the Graphical Assessment of Sustainability in Universities (GASU) was developed. In order to summarize the discussion so far and to obtain a clearer overall picture of the organization of a university campus aiming to become sustainable, I used the SWOT Analysis management tool. In the last two chapters, I analyse the Green Office model in detail. The theorization of this model and the elaboration of the 6 Green Office principles were carried out by rootAbility. The following pages present 3 case studies of how the 6 Green Office principles have been adapted to 3 sustainability units led by students and supported by qualified staff; the discussion covers the main Green Offices established in the Netherlands. Following the introduction of the Green Office model and the illustration of the examples considered, a feasibility analysis was used to judge whether the business idea is viable. The analysis reported below was conducted through a questionnaire on the implemented Green Office model, in which respondents are asked to assess aspects of organizational feasibility and financial feasibility. Finally, in the last section I consider the Green Offices as a single movement. The analysis aims to assess the global impact of the Green Office Movement on university systems and how, as they consolidate within the academic structure, Green Offices can become common practice. The proposed framework contains elements from both SNM (Strategic Niche Management) and TE (Transition Experiments).
Abstract:
A growing world population, a changing climate and dwindling fossil fuels will place new pressures on the human production of food, medicine, fuels and feedstock in the twenty-first century. Enhanced crop production promises to ameliorate these pressures. Crops can be bred for increased yields of calories, starch, nutrients, natural medicinal compounds, and other important products. Enhanced resistance to biotic and abiotic stresses can be introduced, toxins removed, and industrial qualities such as fibre strength and biofuel yield per unit mass can be increased. Induced and natural mutations provide a powerful method for the generation of heritable enhanced traits. While mainly exploited in forward, phenotype-driven approaches, the rapid accumulation of plant genomic sequence information and hypotheses regarding gene function allows mutations to be used in reverse genetic approaches to identify lesions in specific target genes. Such gene-driven approaches promise to speed up the process of creating novel phenotypes and can enable the generation of phenotypes unobtainable by traditional forward methods. TILLING (Targeting Induced Local Lesions IN Genomes) is a high-throughput, low-cost reverse genetic method for the discovery of induced mutations. The method has been modified for the identification of natural nucleotide polymorphisms, a process called Ecotilling. The methods are general and have been applied to many species, including a variety of different crops. In this chapter we describe the current status of the TILLING and Ecotilling methods and provide an overview of progress in applying them to different plant species, with a focus on work related to food production for developing nations.
Abstract:
OBJECTIVE: This study aimed to assess the potential cost-effectiveness of testing patients with nephropathies for the I/D polymorphism before starting angiotensin-converting enzyme (ACE) inhibitor therapy, using a 3-year time horizon and a healthcare perspective. METHODS: We used a combination of decision analysis and Markov modeling to evaluate the potential economic value of this pharmacogenetic test in preventing unfavorable treatment in patients with nephropathies. The estimate of the predictive value of the I/D polymorphism is based on a systematic review showing that DD carriers tend to respond well to ACE inhibitors, while II carriers seem not to benefit adequately from this treatment. Data on ACE inhibitor effectiveness in nephropathy were derived from the REIN (Ramipril Efficacy in Nephropathy) trial. We calculated the number of patients prevented from developing end-stage renal disease (ESRD) and the differences in incremental costs and incremental effect, expressed as life-years free of ESRD. A probabilistic sensitivity analysis was conducted to determine the robustness of the results. RESULTS: Compared with unselective treatment, testing patients for their ACE genotype could save 12 patients per 1000 from developing ESRD during the 3 years covered by the model. As the mean net cost saving was EUR 356,000 per 1000 patient-years, and 9 life-years free of ESRD were gained, selective treatment appears to be dominant. CONCLUSION: The study suggests that genetic testing for the I/D polymorphism in patients with nephropathy before initiating ACE inhibitor therapy will most likely be cost-effective, even if the risk for II carriers of developing ESRD when treated with ACE inhibitors is only 1.4% higher than for DD carriers. Further studies, however, are required to corroborate the difference in treatment response between ACE genotypes before genetic testing can be justified in clinical practice.
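A sketch of the dominance logic using the figures reported above (per 1000 patients over the 3-year model horizon; only the incremental differences quoted in the abstract are used, since the absolute baseline values are not reported):

```python
# Incremental results of genotype-guided vs. unselective treatment (from the abstract).
incremental_cost_eur = -356_000   # net cost saving per 1000 patient-years
incremental_effect   = 9          # life-years free of ESRD gained per 1000 patients
esrd_cases_prevented = 12         # per 1000 patients over 3 years

if incremental_cost_eur < 0 and incremental_effect > 0:
    # Lower cost and better outcomes: no ICER is needed, the strategy dominates.
    print("Genotype-guided treatment dominates unselective treatment.")
else:
    icer = incremental_cost_eur / incremental_effect
    print(f"ICER: {icer:.0f} EUR per life-year free of ESRD")
```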
Abstract:
Fossil fuel consumption in the transportation sector of the United States (U.S.) has caused a range of societal issues, including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuel production from various lignocellulosic biomass types, such as wood, forest residues, and agricultural residues, has the potential to replace a substantial portion of total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize the overall cost. For this purpose, an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling. As a precursor to simulation and optimization modeling, the GIS-based methodology was used to preselect potential biofuel facility locations for biofuel production from forest biomass by employing a series of decision factors; the resulting candidate sites served as inputs for the simulation and optimization models. Candidate locations were selected based on a set of evaluation criteria, including: county boundaries, the railroad transportation network, the state/federal road transportation network, water body (rivers, lakes, etc.) dispersion, city and village dispersion, population census data, biomass production, and no co-location with co-fired power plants. The simulation and optimization models were built around key supply activities, including biomass harvesting/forwarding, transportation and storage. On-site storage was built to cover the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited. Both models were evaluated using multiple performance indicators, including cost (consisting of delivered feedstock cost and inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms to be consistent with cost. Compared with the optimization model, the simulation model provides a more dynamic view of a 20-year operation by considering the impacts of building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated and the year-round inventory level was tracked. Through the exchange of information across the different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address the problem of locating multiple biofuel facilities simultaneously, with the size of each potential facility bounded between 30 and 50 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application which allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
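A minimal sketch of the facility-location decision such an optimization model makes (a brute-force search over a toy instance; the candidate sites, costs and demand below are hypothetical, and the thesis model is a full MPL-based mathematical program rather than this enumeration):

```python
from itertools import combinations

# Toy data: candidate sites with annual fixed cost (million $) and per-MGY
# delivered feedstock cost (million $ per MGY), plus a regional demand target.
sites = {"A": (12.0, 1.10), "B": (10.0, 1.25), "C": (14.0, 0.95)}
demand_mgy = 80.0
min_size, max_size = 30.0, 50.0   # facility size bounds quoted in the abstract

best = None
for k in range(1, len(sites) + 1):
    for chosen in combinations(sites, k):
        if not (k * min_size <= demand_mgy <= k * max_size):
            continue                            # demand cannot be met with k plants
        # Fill cheapest-feedstock sites first, respecting the size bounds.
        size = {s: min_size for s in chosen}
        remaining = demand_mgy - k * min_size
        for s in sorted(chosen, key=lambda s: sites[s][1]):
            extra = min(max_size - min_size, remaining)
            size[s] += extra
            remaining -= extra
        cost = sum(sites[s][0] + sites[s][1] * size[s] for s in chosen)
        if best is None or cost < best[0]:
            best = (cost, size)

print(f"Best configuration: {best[1]} (total cost {best[0]:.1f} M$)")
```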