841 results for low and medium-low technology industries
Abstract:
With the deepening of oil and gas exploration and the sharp rise in costs, modern seismic techniques have progressed rapidly. Seismic inversion, a key technology of seismic exploration, extracts seismic attributes from reflection data, inverts the underground distribution of wave impedance or velocity, estimates reservoir parameters, and supports reservoir prediction and description, providing reliable basic material for oil and gas exploration. Well-driven seismic inversion is essentially a joint seismic-logging inversion: the low- and high-frequency information comes from the logging data, while the structural characteristics and the medium frequency band depend on the seismic data. Inversion results depend mainly on the quality of the raw data, the rationality of the processing, and the correlation between synthetic and seismic data. This paper mainly investigates how log-to-seismic correlation affects the precision of well-driven seismic inversion. The log-to-seismic correlation is assessed through synthetics, the comparison between mid-frequency borehole impedance and relative seismic impedance, and well-attribute crossplots. Analysis of three real working areas (Qikou Sag, Qiongdongnan Basin, Sulige gas field) verifies that the better the log-to-seismic correlation, the more reliable the seismic inversion result.
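The log-to-seismic correlation the abstract discusses can be illustrated with a minimal sketch: the zero-lag normalized cross-correlation between a synthetic seismogram (computed from well logs) and the seismic trace at the well. The data below are synthetic toy signals, not the thesis's workflow or data.

```python
import numpy as np

def log_to_seismic_correlation(synthetic, seismic):
    """Zero-lag normalized cross-correlation between a synthetic
    seismogram (from well logs) and the seismic trace at the well.
    Returns a value in [-1, 1]; closer to 1 means better tie."""
    s = (synthetic - synthetic.mean()) / synthetic.std()
    t = (seismic - seismic.mean()) / seismic.std()
    return float(np.dot(s, t) / len(s))

# Toy example: a trace and a noisy copy of it (hypothetical data).
rng = np.random.default_rng(0)
trace = np.sin(np.linspace(0, 20, 500))
noisy = trace + 0.3 * rng.standard_normal(500)
r = log_to_seismic_correlation(trace, noisy)
```

A correlation near 1 indicates a reliable well tie; in practice the tie is also checked across a time window and after wavelet estimation.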
Abstract:
Superfine mineral materials result mainly from the pulverization of natural mineral resources; they are a class of new materials that can replace traditional materials and currently enjoy the most extensive application and highest consumption in the market. As a result, superfine mineral materials have very broad and promising market potential. Superfine pulverization is the only route for the in-depth processing of most traditional materials, and is also one of the major means by which mineral materials realize their applications. China is rich in natural resources such as heavy calcite, kaolin and wollastonite, which enjoy a very wide market in the paper making, rubber, plastics, painting, coating, medicine, environment-friendly recycled paper and fine chemical industries, for example. However, because the processing of these resources is generally at a low level, the economic benefit and scale of their processing have not yet been realized to their full potential. Large differences in product indices and in superfine processing equipment and technologies still exist between China and advanced western countries. Based on resource assessment and market potential analysis, this paper presents an in-depth study of superfine pulverization technology and superfine pulverized mineral materials from the points of view of mineralogical features, determination of processing technologies, analytical methods and applications, utilizing a variety of modern analytical methods in mineralogy, superfine pulverization technology, macromolecular chemistry, materials science and physical chemistry, together with computer technology. The focus was placed on innovative study of the in-depth processing technology and processing apparatus for kaolin and heavy calcite, as well as the application of the superfine products.
The main contents and major achievements of this study are as follows: 1. Superfine pulverization of mineral materials should be integrated with the study of their crystal structures and chemical composition, and special attention should be paid to post-processing technologies, rather than to particle-size indices alone, according to the intended fields of application. Both technical and economic feasibility must be taken into account in the study of superfine pulverization technologies, since these two feasibilities are the premise for the industrialized application of superfine pulverized mineral materials. Based on this principle, a pre-treatment chemical method, a technology of synchronized superfine pulverization and gradation, and a processing technology and apparatus of integrated modification and depolymerization were employed in this study, achieving superfine products with narrow particle-size distribution, good dispersibility, good application effects, low consumption and high effectiveness. Heavy calcite and kaolin are the two superfine mineral materials with the highest industrial consumption. Heavy calcite is mainly applied in the paper making, coating and plastics industries; the hard kaolin of northern China is mainly used in macromolecular materials and chemical industries, while the soft kaolin of southern China is mainly used for paper making. In addition, superfine pulverized heavy calcite and kaolin can both be used as functional additives to cement, the material with the largest consumption in the world.
A variety of analytical methods and instruments, such as transmission and scanning electron microscopy, X-ray diffraction analysis, infrared analysis and laser particle-size analysis, were applied to elucidate the properties and functional mechanisms of superfine mineral materials as used in plastics and high-performance cement. Detection of superfine mineral materials is closely related to their post-processing and application. Traditional detection and analytical methods include optical microscopy, infrared spectral analysis and a series of microbeam techniques such as transmission and scanning electron microscopy and X-ray diffraction analysis. In addition to these traditional methods, super-weak luminescent photon detection of high precision, high sensitivity and high signal-to-noise ratio was utilized by the author for the first time in the study of superfine mineral materials, in an attempt to explore a completely new method and means for the characterization of superfine materials; the experimental results are highly encouraging. The innovations of this study are as follows: 1. A pre-treatment chemical method, a technology of synchronized superfine pulverization and gradation, and a processing technology and apparatus of integrated modification and depolymerization were applied in an innovative way, achieving superfine products with narrow particle-size distribution, good dispersibility, good application effects, low consumption and high effectiveness in industrialized production. Moreover, a new modification technology and related directions for producing the chemicals were invented, and the modification technology was awarded a patent. 2.
Super-weak luminescent photon detection of high precision, high sensitivity and high signal-to-noise ratio was utilized for the first time in this study to examine superfine mineral materials; the experimental results are comparable with those acquired by scanning electron microscopy and demonstrate unique advantages. Further study may well lead to a completely new method and means for the characterization of superfine materials. 3. During the heating of kaolinite and its decomposition into pianlinite, the diffraction peaks disappear gradually: first the reflection of the basal plane (001) disappears, and then the (hkl) diffraction peaks slowly vanish. This was first discovered by the author during the experiments and had never before been reported by other scholars. 4. This study also made, and discussed in detail, the first discovery that superfine mineral materials can serve as dispersants in plastics, and the first discovery of their comprehensive functions as activators, water-reducing agents and aggregates in high-performance cement.
This study was jointly supported by two key grants from Guangdong Province for Scientific and Technological Research in the 10th Five-Year Plan period (1,200,000 yuan for "Preparation technology, apparatus and post-processing research using sub-micron superfine mechanical pulverization", and 300,000 yuan for "Methods and instruments for biological photon technology in the characterization of nanometer materials"), and two grants from the Guangdong Province 100 Projects for Scientific and Technological Innovation (700,000 yuan for "Pilot experimentation of superfine and modified heavy calcite used in the paper-making, rubber and plastics industries", and 400,000 yuan for "Study of superfine, modified wollastonite of large length-to-diameter ratio").
Abstract:
The Kela-2 gas field in the Tarim Basin is the main supply source for the West-to-East Pipeline project and the largest abnormally pressured gas field discovered in China to date. Its geological characterization, fine geological modeling and field development planning are all world-class problems. This work includes an integrated geological and gas-reservoir engineering study using advanced technology and approaches, a scientific development plan for the Kela-2 gas field, and optimization of the drilling, production and surface schemes, so that the field can be developed with high efficiency. The Kuche depression is part of the thrust belt of the South Tianshan Mountains, and the Kela-2 field is located in the Kelasu structural zone in the north of the Kuche depression. The field territory is heavily rugged with deeply cut gullies, a complex underground geological structure, variable rock types and well-developed thrust structures. Considerable effort has therefore been made to develop an integrated technique to acquire, process and interpret seismic data in complicated mountainous regions; the resulting set of techniques has been successfully used to interpret the structure of the Kela-2 gas field. The main reservoir depositional system of the Kela-2 gas field is a platform - fan delta - braided river system. The reservoir rocks are medium-fine and extremely fine grained sandstones with high textural maturity and low compositional maturity. The pore system is characterized by medium-small pores, medium-fine throats and medium-low sorting. The reservoir is characterized by medium porosity and medium permeability. The pay zone is very thick, its lateral distribution is stable, and the sand bodies are well connected.
The overpressure is caused mainly by strong tectonic squeezing, with other contributing factors including later rapid uplift, compartmentalization of high-pressure fluid, and injection of high-pressure fluid into the reservoir. Based on the deliverability tests available, an average binomial deliverability equation applicable to the overall field is provided. The results of rock stress-sensitivity experiments are used to analyze the trend of petrophysical properties against net confining stress and to establish a stress-based average deliverability equation. The results show that the effect of rock deformation on deliverability is less than 5% in the early period of the Kela-2 gas field, indicating an insignificant effect. In terms of well-pattern comparison and development-plan optimization, it is recommended that the producers be located almost linearly along the structural axis. A total of 9 producers can stably supply 10.76 BCM per year of gas for 17 years. The total construction investment for the Kela-2 field is estimated at ¥7,697,690,000 RMB, with an internal rate of return of 25.02% after taxation, a net present value of ¥7,420,160,000 RMB and a payback period of 5.66 years. The economics of this field development project are very satisfactory.
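The binomial deliverability equation mentioned above is the Forchheimer-type relation Δp² = Aq + Bq². A minimal sketch of fitting it to test points and computing absolute open flow follows; all numbers are hypothetical, not Kela-2 field data.

```python
import numpy as np

# Hypothetical deliverability-test points: gas rate q and pseudo-pressure
# drawdown Δp² (MPa²). Generated from Δp² = A q + B q² with A=2.2, B=0.025.
q = np.array([20.0, 40.0, 60.0, 80.0])
dp2 = np.array([54.0, 128.0, 222.0, 336.0])

# Least-squares fit of the binomial deliverability equation Δp² = A q + B q².
X = np.column_stack([q, q**2])
A, B = np.linalg.lstsq(X, dp2, rcond=None)[0]

# Absolute open flow (AOF): the rate at which Δp² equals the squared static
# reservoir pressure (i.e. flowing pressure drops to zero).
p_r2 = 74.0**2  # hypothetical static reservoir pressure of 74 MPa, squared
aof = (-A + np.sqrt(A**2 + 4 * B * p_r2)) / (2 * B)
```

The A term captures Darcy (laminar) flow resistance and the B term the non-Darcy (turbulent) component; a stress-based variant would make A and B functions of net confining stress.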
Abstract:
Background: Until recently, little was known about the costs of the HIV/AIDS epidemic to businesses in Africa and business responses to the epidemic. This paper synthesizes the results of a set of studies conducted between 1999 and 2006 and draws conclusions about the role of the private sector in Africa’s response to AIDS. Methods: Detailed human resource, financial, and medical data were collected from 14 large private and parastatal companies in South Africa, Uganda, Kenya, Zambia, and Ethiopia. Surveys of small and medium-sized enterprises (SMEs) were conducted in South Africa, Kenya, and Zambia. Large companies’ responses or potential responses to the epidemic were investigated in South Africa, Uganda, Kenya, Zambia, and Rwanda. Results: Among the large companies, estimated workforce HIV prevalence ranged from 5% to 37%. The average cost per employee lost to AIDS varied from 0.5-5.6 times the average annual compensation of the employee affected. Labor cost increases as a result of AIDS were estimated at anywhere from 0.6%-10.8% but exceeded 3% at only 2 of 14 companies. Treatment of eligible employees with ART at a cost of $360/patient/year was shown to have positive financial returns for most but not all companies. Uptake of employer-provided testing and treatment services varied widely. Among SMEs, HIV prevalence in the workforce was estimated at 10%-26%. SME managers consistently reported low AIDS-related employee attrition, little concern about the impacts of AIDS on their companies, and relatively little interest in taking action, and fewer than half had ever discussed AIDS with their senior staff. AIDS was estimated to increase the average operating costs of small tourism companies in Zambia by less than 1%; labor cost increases in other sectors were probably smaller. Conclusions: Although there was wide variation among the firms studied, clear patterns emerged that will permit some prediction of impacts and responses in the future.
Abstract:
This thesis explores the drivers of innovation in Irish high-technology businesses and estimates, in particular, the relative importance of interaction with external businesses and other organisations as a source of knowledge for innovation at the business-level. The thesis also examines the extent to which interaction for innovation in these businesses occurs on a local or regional basis. The study uses original survey data of 184 businesses in the Chemical and Pharmaceutical, Information and Communications Technology and Engineering and Electronic Devices sectors. The study considers both product and process innovation at the level of the business and develops new measures of innovation output. For the first time in an Irish study, the incidence and frequency of interaction is measured for each of a range of agents: other group companies, suppliers, customers, competitors, academic-based researchers and innovation-supporting agencies. The geographic proximity between the business and the most important agent in each category is measured using average one-way driving distance, the first time such a measure has been used in an Irish study of innovation. Utilising econometric estimation techniques, it is found that interaction with customers, suppliers and innovation-supporting agencies is positively associated with innovation in Irish high-technology businesses. Surprisingly, however, interaction with academic-based researchers is found to have a negative effect on innovation output at the business-level. While interaction generally emerges as a positive influence on business innovation, there is little evidence that this occurs at a local or regional level. Furthermore, there is little support for the presence of localisation economies for high-technology sectors, though some tentative evidence of urbanisation economies.
This has important implications for Irish regional, enterprise and innovation policy, which has emphasised the development of clusters of internationally competitive businesses. The thesis brings into question the suitability of a cluster-driven, network-based approach to business development and competitiveness in an Irish context.
Abstract:
In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media, e-government etc. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnects. Today, transceivers for these applications achieve up to 100Gb/s by multiplexing 10x 10Gb/s or 4x 25Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400Gb/s up to 1Tb/s. The crucial challenge is to achieve this in the same footprint (same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: a 1Tb/s transceiver would require integration of 40 VCSELs (vertical cavity surface emitting laser diodes, widely used for short-reach optical interconnects), 40 photodiodes and the electronics operating at 25Gb/s in the same module as today's 100Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work examines a number of state-of-the-art technologies, investigates their performance constraints and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65nm and 28nm) CMOS technology are explored, while maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis on the rising and falling edges of the signal, and bandwidth extension by inductive peaking and different local feedback techniques.
These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format increases the throughput per individual channel, which helps to overcome the challenges mentioned above in realizing 400Gb/s to 1Tb/s transceivers.
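To illustrate why PAM-4 doubles throughput per channel, here is a minimal Gray-coded mapper sketch. The levels and bit mapping are illustrative only, not the thesis's circuit implementation.

```python
# Minimal PAM-4 mapper sketch: Gray-coded bit pairs onto four amplitude
# levels, so adjacent levels differ by a single bit (limits error bursts).
GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
INV_MAP = {level: bits for bits, level in GRAY_MAP.items()}

def pam4_encode(bits):
    """Pack a flat bit list into PAM-4 symbols (2 bits per symbol)."""
    assert len(bits) % 2 == 0
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_decode(symbols):
    """Recover the original bit list from PAM-4 symbols."""
    out = []
    for s in symbols:
        out.extend(INV_MAP[s])
    return out

# 8 bits fit in 4 symbols: same symbol rate as NRZ, double the bit rate.
bits = [0, 0, 1, 0, 1, 1, 0, 1]
symbols = pam4_encode(bits)
```

The cost of the doubled throughput is a reduced eye opening (three smaller eyes instead of one), which is why the transmitter and receiver bandwidth-extension techniques above matter.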
Abstract:
OBJECTIVES: This study compared LDL, HDL, and VLDL subclasses in overweight or obese adults consuming either a reduced carbohydrate (RC) or reduced fat (RF) weight maintenance diet for 9 months following significant weight loss. METHODS: Thirty-five (21 RC; 14 RF) overweight or obese middle-aged adults completed a 1-year weight management clinic. Participants met weekly for the first six months and bi-weekly thereafter. Meetings included instruction for diet, physical activity, and behavior change related to weight management. Additionally, participants followed a liquid very low-energy diet of approximately 2092 kJ per day for the first three months of the study. Subsequently, participants followed a dietary plan for nine months that targeted a reduced percentage of carbohydrate (approximately 20%) or fat (approximately 30%) intake and an energy intake level calculated to maintain weight loss. Lipid subclasses were analyzed using NMR spectroscopy prior to weight loss and at multiple intervals during weight maintenance. RESULTS: Body weight change was not significantly different within or between groups during weight maintenance (p>0.05). The RC group showed significant increases in mean LDL size, large LDL, total HDL, large and small HDL, mean VLDL size, and large VLDL during weight maintenance, while the RF group showed increases in total HDL, large and small HDL, total VLDL, and large, medium, and small VLDL (p<0.05). Group*time interactions were significant for large and medium VLDL (p<0.05). CONCLUSION: Some individual lipid subclasses improved in both dietary groups. Large and medium VLDL subclasses increased to a greater extent across weight maintenance in the RF group.
Abstract:
In the frame of the European Project on Ocean Acidification (EPOCA), the response of an Arctic pelagic community (<3 mm) to a gradient of seawater pCO2 was investigated. For this purpose 9 large-scale in situ mesocosms were deployed in Kongsfjorden, Svalbard (78°56.2' N, 11°53.6' E), in 2010. The present study investigates effects on the communities of particle-attached (PA; >3 µm) and free-living (FL; <3 µm but >0.2 µm) bacteria by Automated Ribosomal Intergenic Spacer Analysis (ARISA) in 6 of the mesocosms, ranging from 185 to 1050 µatm initial pCO2, and the surrounding fjord. ARISA was able to resolve, on average, 27 bacterial band classes per sample and allowed for a detailed investigation of the explicit richness and diversity. Both the PA and the FL bacterioplankton communities exhibited a strong temporal development, driven mainly by temperature and phytoplankton development. In response to the breakdown of a picophytoplankton bloom, numbers of ARISA band classes in the PA community were reduced at low and medium CO2 (~185-685 µatm) by about 25%, while they were more or less stable at high CO2 (~820-1050 µatm). We hypothesise that enhanced viral lysis and enhanced availability of organic substrates at high CO2 resulted in a more diverse PA bacterial community in the post-bloom phase. Despite lower cell numbers and extracellular enzyme activities in the post-bloom phase, bacterial protein production was enhanced in high CO2 mesocosms, suggesting a positive effect of community richness on this function and on carbon cycling by bacteria.
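The richness and diversity metrics derived from ARISA band classes can be sketched as follows. The profile below is hypothetical; the abstract does not specify which diversity index the study used, so the common Shannon index is shown as an example.

```python
import math

def richness_and_shannon(band_intensities):
    """Band-class richness S and Shannon diversity H' from an ARISA
    profile, given relative band intensities (zeros = class absent)."""
    present = [x for x in band_intensities if x > 0]
    total = sum(present)
    p = [x / total for x in present]          # relative abundances
    H = -sum(pi * math.log(pi) for pi in p)   # Shannon index (natural log)
    return len(present), H

# Hypothetical profile: 4 detected band classes out of 6 positions.
S, H = richness_and_shannon([0.4, 0.0, 0.3, 0.2, 0.0, 0.1])
```

A drop of about 25% in band classes, as reported for the PA community at low and medium CO2, corresponds directly to a drop in S; H additionally weights how evenly the classes are represented.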
Abstract:
Background: Automated closed loop systems may improve adaptation of the mechanical support to a patient's ventilatory needs and
facilitate systematic and early recognition of their ability to breathe spontaneously and the potential for discontinuation of
ventilation.
Objectives: To compare the duration of weaning from mechanical ventilation for critically ill ventilated adults and children when managed
with automated closed loop systems versus non-automated strategies. Secondary objectives were to determine differences
in duration of ventilation, intensive care unit (ICU) and hospital length of stay (LOS), mortality, and adverse events.
Search methods: We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2011, Issue 2); MEDLINE (OvidSP) (1948 to August 2011); EMBASE (OvidSP) (1980 to August 2011); CINAHL (EBSCOhost) (1982 to August 2011); and the Latin American and Caribbean Health Sciences Literature (LILACS). In addition we received and reviewed auto-alerts for our search strategy in MEDLINE, EMBASE, and CINAHL up to August 2012. Relevant published reviews were sought using the Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessment Database (HTA Database). We also searched the Web of Science Proceedings; conference proceedings; trial registration websites; and reference lists of relevant articles.
Selection criteria: We included randomized controlled trials comparing automated closed loop ventilator applications to non-automated weaning
strategies including non-protocolized usual care and protocolized weaning in patients over four weeks of age receiving invasive mechanical ventilation in an intensive care unit (ICU).
Data collection and analysis: Two authors independently extracted study data and assessed risk of bias. We combined data into forest plots using random-effects modelling. Subgroup and sensitivity analyses were conducted according to a priori criteria.
Main results: Pooled data from 15 eligible trials (14 adult, one paediatric) totalling 1173 participants (1143 adults, 30 children) indicated that automated closed loop systems reduced the geometric mean duration of weaning by 32% (95% CI 19% to 46%, P = 0.002), however heterogeneity was substantial (I2 = 89%, P < 0.00001). Reduced weaning duration was found with mixed or
medical ICU populations (43%, 95% CI 8% to 65%, P = 0.02) and Smartcare/PS™ (31%, 95% CI 7% to 49%, P = 0.02) but not in surgical populations or using other systems. Automated closed loop systems reduced the duration of ventilation (17%, 95% CI 8% to 26%) and ICU length of stay (LOS) (11%, 95% CI 0% to 21%). There was no difference in mortality rates or hospital LOS. Overall the quality of evidence was high with the majority of trials rated as low risk.
Authors' conclusions: Automated closed loop systems may result in reduced duration of weaning, ventilation, and ICU stay. Reductions are more
likely to occur in mixed or medical ICU populations. Due to the lack of, or limited, evidence on automated systems other than Smartcare/PS™ and Adaptive Support Ventilation no conclusions can be drawn regarding their influence on these outcomes. Due to substantial heterogeneity in trials there is a need for an adequately powered, high quality, multi-centre randomized
controlled trial in adults that excludes 'simple to wean' patients. There is a pressing need for further technological development and research in the paediatric population.
Abstract:
We report four repetitions of Falk and Kosfeld's (Am. Econ. Rev. 96(5):1611-1630, 2006) low and medium control treatments with 476 subjects. Each repetition employs a sample drawn from a standard subject pool of students, and demographics vary across samples. We largely confirm the existence of hidden costs of control but, contrary to the original study, hidden costs of control are usually not substantial enough to significantly undermine the effectiveness of economic incentives. Our subjects were asked, at the end of the experimental session, to complete a questionnaire in which they had to state their work motivation in hypothetical scenarios. Our questionnaires are identical to the ones administered in Falk and Kosfeld's questionnaire study. In contrast to the game play data, our questionnaire data are similar to those of the original questionnaire study. In an attempt to solve this puzzle, we report an extension with 228 subjects where performance-contingent earnings are absent, i.e., both principals and agents are paid a flat participation fee. We observe that hidden costs significantly outweigh benefits of control under hypothetical incentives.
Abstract:
Background: Combination drug products can display thermal behaviour that is more complex than for the corresponding single drug products. For example, the contraceptive vaginal ring (VR) Nuvaring contains a eutectic (lowest melting) composition of etonogestrel (ETN) and ethinyl estradiol. Here we report the predisposition of dapivirine (DPV) to form reduced melting/eutectic mixtures when combined with other contraceptive hormones and antiretrovirals, and discuss the implications for development of combination microbicide and multipurpose prevention technology (MPT) products.
Methods: Binary mixtures of DPV with darunavir (DRV), levonorgestrel (LNG), ETN or maraviroc (MVC) were prepared either by physical mixing or by solvent evaporation. Selected binary mixtures were also incorporated into silicone elastomer (SE) VR devices. Thermal behavior of the mixtures was analyzed using differential scanning calorimetry (DSC) operating in standard heating ramp mode (10 °C/min). DSC data were used to construct two component phase diagrams for each binary system.
Results: Drug mixtures typically showed reduced melting transitions for both drug components, with clear evidence for a eutectic mixture at a well-defined intermediate composition. Eutectic temperatures and compositions for the various mixtures were: 40% DPV / 60% ETN - 170°C; 25% DPV / 75% MVC - 172°C; 65% DPV / 35% LNG - 192°C. In each case, the eutectic composition was also detected when the drug mixtures were incorporated into SE VRs. For the DPV/DRV system, the thermal behaviour is complicated by desolvation from the darunavir ethanolate polymorph.
Conclusions: When DPV is combined with small molecular weight hydrophobic drugs, the melting temperature for both drugs is typically reduced to a degree dependent on the composition of the mixture. At specified compositions, a low melting eutectic system results. The formation of eutectic behavior in binary drug systems needs to be carefully characterised in order to define product performance and drug release.
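The two-component phase diagrams described in the methods can be approximated, under ideal-solution assumptions, with the Schroeder-van Laar equation, whose liquidus curves intersect at the eutectic. The sketch below uses hypothetical fusion enthalpies and melting points, not the measured DPV/ETN/MVC/LNG values.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def liquidus_temperature(x, dH, Tm):
    """Ideal Schroeder-van Laar liquidus: temperature at which liquid of
    mole fraction x is in equilibrium with the pure solid component.
    ln x = -(dH/R) * (1/T - 1/Tm)  =>  1/T = 1/Tm - R*ln(x)/dH"""
    return 1.0 / (1.0 / Tm - R * math.log(x) / dH)

# Hypothetical enthalpies of fusion (J/mol) and melting points (K) for a
# binary drug pair; illustrative numbers only.
dH_a, Tm_a = 30000.0, 220 + 273.15   # component A
dH_b, Tm_b = 28000.0, 200 + 273.15   # component B

# Scan composition for the eutectic: where the two liquidus curves cross.
best = min((abs(liquidus_temperature(x, dH_a, Tm_a)
               - liquidus_temperature(1 - x, dH_b, Tm_b)), x)
           for x in [i / 1000 for i in range(1, 1000)])
x_eutectic = best[1]
T_eutectic = liquidus_temperature(x_eutectic, dH_a, Tm_a)
```

The eutectic temperature always lies below both pure-component melting points, which is why the DSC traces of the drug mixtures show reduced melting transitions at the well-defined intermediate compositions reported above.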
Abstract:
Objective: To systematically review the evidence examining effects of walking interventions on pain and self-reported function in individuals with chronic musculoskeletal pain.
Data Sources: Six electronic databases (Medline, CINAHL, PsycINFO, PEDro, SportDiscus and the Cochrane Central Register of Controlled Trials) were searched from January 1980 up to March 2014.
Study Selection: Randomized and quasi-randomized controlled trials in adults with chronic low back pain, osteoarthritis or fibromyalgia comparing walking interventions to a non-exercise or non-walking exercise control group.
Data Extraction: Data were independently extracted using a standardized form. Methodological quality was assessed using the United States Preventative Services Task Force (USPSTF) system.
Data Synthesis: Twenty-six studies (2384 participants) were included and suitable data from 17 were pooled for meta-analysis with a random effects model used to calculate between group mean differences and 95% confidence intervals. Data were analyzed according to length of follow-up (short-term: ≤8 weeks post randomization; medium-term: >2 months - 12 months; long-term: >12 months). Interventions were associated with small to moderate improvements in pain at short (mean difference (MD) -5.31, 95% confidence interval (95% CI) -8.06 to -2.56) and medium-term follow-up (MD -7.92, 95% CI -12.37 to -3.48). Improvements in function were observed at short (MD -6.47, 95% CI -12.00 to -0.95), medium (MD -9.31, 95% CI -14.00 to -4.61) and long-term follow-up (MD -5.22, 95% CI -7.21 to -3.23).
Conclusions: Evidence of fair methodological quality suggests that walking is associated with significant improvements in outcome compared to control interventions but longer-term effectiveness is uncertain. Using the USPSTF system, walking can be recommended as an effective form of exercise or activity for individuals with chronic musculoskeletal pain but should be supplemented with strategies aimed at maintaining participation. Further work is also required examining effects on important health related outcomes in this population in robustly designed studies.
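The random-effects pooling used in the data synthesis can be sketched with the DerSimonian-Laird estimator, the standard choice for such models. The study-level mean differences and variances below are hypothetical, not the review's data.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled mean difference and 95% CI via the
    DerSimonian-Laird between-study variance estimator."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - df) / c)                         # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical study-level mean differences in pain score and variances.
md, (lo, hi) = dersimonian_laird([-4.0, -6.5, -5.0], [1.2, 2.0, 1.5])
```

When the heterogeneity statistic Q does not exceed its degrees of freedom, tau² is truncated to zero and the random-effects result coincides with the fixed-effect one, as happens with these toy numbers.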
Abstract:
Post-traumatic stress, depression and anxiety symptoms are common outcomes following earthquakes, and may persist for months and years. This study systematically examined the impact of neighbourhood damage exposure and average household income on psychological distress and functioning in 600 residents of Christchurch, New Zealand, 4–6 months after the fatal February, 2011 earthquake. Participants were from highly affected and relatively unaffected suburbs in low, medium and high average household income areas. The assessment battery included the Acute Stress Disorder Scale, the depression module of the Patient Health Questionnaire (PHQ-9), and the Generalized Anxiety Disorder Scale (GAD-7), along with single item measures of substance use, earthquake damage and impact, and disruptions in daily life and relationship functioning. Controlling for age, gender and social isolation, participants from low income areas were more likely to meet diagnostic cut-offs for depression and anxiety, and have more severe anxiety symptoms. Higher probabilities of acute stress, depression and anxiety diagnoses were evident in affected versus unaffected areas, and those in affected areas had more severe acute stress, depression and anxiety symptoms. An interaction between income and earthquake effect was found for depression, with those from the low and medium income affected suburbs more depressed. Those from low income areas were more likely, post-earthquake, to start psychiatric medication and increase smoking. There was a uniform increase in alcohol use across participants. Those from the low income affected suburb had greater general and relationship disruption post-quake. Average household income and damage exposure made unique contributions to earthquake-related distress and dysfunction.
Abstract:
Doctoral thesis, Education (Information and Communication Technologies in Education), Universidade de Lisboa, Instituto de Educação, 2015
Abstract:
In the smart grid context, distributed generation units based on renewable resources play an important role. Photovoltaic solar units are an evolving technology whose prices have decreased significantly in recent years, due to the high penetration of this technology in low voltage and medium voltage networks supported by governmental policies and incentives. This paper proposes a methodology to determine the maximum penetration of photovoltaic units in a distribution network. The paper presents a case study, with four different scenarios, that considers a 32-bus medium voltage distribution network and the inclusion of storage units.
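The hosting-capacity idea behind such a methodology can be illustrated with a toy scan: raise PV injection at a bus until the voltage limit would be violated. The linear voltage-sensitivity model and all numbers below are hypothetical stand-ins for the paper's actual power-flow study of the 32-bus network.

```python
def max_pv_penetration(v_base, sensitivity, v_max, step_kw=50.0):
    """Toy hosting-capacity scan: increase PV injection in steps until
    the bus voltage (per unit) would exceed v_max. Uses a linear
    voltage-sensitivity model instead of a full power flow."""
    pv_kw = 0.0
    while v_base + sensitivity * (pv_kw + step_kw) <= v_max:
        pv_kw += step_kw
    return pv_kw

# Hypothetical feeder bus: 1.02 pu with no PV, +0.0001 pu per kW injected,
# and a 1.046 pu upper voltage limit.
capacity = max_pv_penetration(v_base=1.02, sensitivity=0.0001, v_max=1.046)
```

Storage units, as considered in the case study's scenarios, raise this limit by absorbing part of the injection at peak generation, flattening the voltage rise.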