872 results for Android, Electric Vehicle, EV, Range Anxiety, Internet of Energy
Abstract:
The increasing dependence of everyday life on mobile devices also increases the number and complexity of computing tasks these devices must support. However, the inherent requirement of mobility prevents them from being resource-rich, both in terms of energy (battery capacity) and in other computing resources such as processing capacity and memory. This thesis investigates cyber foraging, a technique for offloading computing tasks. Various experiments on Android mobile devices are carried out to evaluate the benefits of offloading in terms of sustainability, prolonged battery life and augmented device performance. Two cyber-foraging scenarios are considered, namely opportunistic offloading and competitive offloading. The results show that both offloading scenarios matter for green computing and for resource augmentation of mobile devices: a significant gain in battery life and performance is obtained, and cyber foraging proves efficient in minimizing energy consumption per computing task. The work is based on the Scavenger cyber-foraging system. It can also serve as a basis for studying cyber foraging and similar approaches, such as mobile cloud/edge computing for Internet of Things devices, and for improving application user experience by minimizing latency through the use of nearby surrogates.
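The abstract does not state the offloading decision rule used; as a hedged illustration of the underlying energy trade-off (all names and parameter values below are hypothetical, not Scavenger's), a simple model compares the energy of local execution against the energy of shipping the task and waiting for the result:

```python
# Illustrative energy-based offload decision (hypothetical model, not the
# thesis's Scavenger implementation): offloading pays off when the energy
# to compute locally exceeds the energy to transmit the task and idle-wait.

def should_offload(cycles, data_bytes,
                   local_power_w=2.0,      # CPU power while computing (assumed)
                   local_speed_hz=1.0e9,   # local CPU speed (assumed)
                   radio_power_w=1.5,      # radio power while transmitting (assumed)
                   bandwidth_bps=5.0e6,    # link bandwidth to surrogate (assumed)
                   idle_power_w=0.3,       # device power while waiting (assumed)
                   remote_speed_hz=4.0e9): # surrogate CPU speed (assumed)
    """Return True if offloading is estimated to save energy."""
    e_local = local_power_w * cycles / local_speed_hz
    t_transfer = data_bytes * 8 / bandwidth_bps
    t_remote = cycles / remote_speed_hz
    e_offload = radio_power_w * t_transfer + idle_power_w * t_remote
    return e_offload < e_local

# Example: a 5-gigacycle task with 1 MB of input favours offloading here.
print(should_offload(cycles=5e9, data_bytes=1_000_000))
```

Under these assumed parameters local execution costs about 10 J against roughly 2.8 J for offloading, which is the kind of per-task energy saving the thesis measures experimentally.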
Abstract:
Transient capacitance methods were applied to the depletion region of an abrupt asymmetric n(+)-p junction of silicon and unintentionally doped poly[2-methoxy-5-(2'-ethylhexyloxy)paraphenylenevinylene] (MEH-PPV). Studies in the temperature range 100-300 K show the presence of a majority-carrier trap at 1.0 eV and two minority-carrier traps at 0.7 and 1.3 eV, respectively. There is an indication of further levels whose activation energies could not be determined. Furthermore, admittance data reveal a bulk activation energy for conduction of 0.12 eV, suggesting the presence of an additional shallow acceptor state. (C) 1999 American Institute of Physics. [S0003-6951(99)02308-6].
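For context, trap depths of the kind reported above are conventionally extracted from the thermally activated emission rate in transient-capacitance (DLTS-type) analysis; a standard textbook form (not quoted from this paper) is:

```latex
% Conventional thermal-emission relation used in transient-capacitance
% (DLTS-type) trap spectroscopy; standard textbook form, not taken from
% this paper.
\begin{equation}
  e_n(T) \;=\; \sigma_n \,\langle v_{th}\rangle\, N_c \,
  \exp\!\left(-\frac{E_a}{k_B T}\right)
\end{equation}
% where $e_n$ is the emission rate, $\sigma_n$ the capture cross section,
% $\langle v_{th}\rangle$ the carrier thermal velocity, $N_c$ the effective
% density of states, and $E_a$ the trap activation energy, obtained from
% an Arrhenius plot of $\ln(e_n/T^2)$ versus $1/T$.
```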
Abstract:
The purpose of this study was to examine the reliability and validity of the School Anxiety Inventory (SAI) using a sample of 646 Slovenian adolescents (48% boys), ranging in age from 12 to 19 years. Single-group confirmatory factor analyses replicated the correlated four-factor structure of scores on the SAI for anxiety-provoking school situations (Anxiety about School Failure and Punishment, Anxiety about Aggression, Anxiety about Social Evaluation, and Anxiety about Academic Evaluation), and the three-factor structure of the anxiety response systems (Physiological Anxiety, Cognitive Anxiety, and Behavioral Anxiety). The equality of factor structures across groups was tested using multigroup confirmatory factor analyses, and measurement invariance for the four- and three-factor models was obtained across gender and school-level samples. The scores of the instrument showed high internal reliability and adequate test–retest reliability. The concurrent validity of the SAI scores was also examined through their relationship with scores on the Social Anxiety Scale for Adolescents (SASA) and the Questionnaire about Interpersonal Difficulties for Adolescents (QIDA). Correlations of the SAI scores with scores on the SASA and the QIDA were of low to moderate effect sizes.
Abstract:
The electronic conduction of thin-film field-effect transistors (FETs) of sexithiophene was studied. In most cases the transfer curves deviate from standard FET theory: they are not linear, but follow a power law instead. These results are compared to the conduction models of "variable-range hopping" and "multi-trap-and-release". The accompanying I-V curves follow a Poole-Frenkel (exponential) dependence on the drain voltage. The results are explained by assuming a huge density of traps. Below 200 K, the activation energy for conduction was found to be ca. 0.17 eV. The activation energies of the mobility follow the Meyer-Neldel rule. A sharp transition in the behavior of the devices is seen at around 200 K. The difference in behavior between a micro-FET and a submicron FET is shown. (C) 2004 American Institute of Physics.
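For context on the Meyer-Neldel rule cited above, its standard statement (textbook form, not quoted from this paper) is that the prefactor of a thermally activated quantity grows exponentially with the activation energy:

```latex
% Standard statement of thermally activated mobility and the Meyer-Neldel
% rule (textbook form, not taken from this paper).
\begin{align}
  \mu(T) &= \mu_0 \exp\!\left(-\frac{E_a}{k_B T}\right), \\
  \mu_0  &= \mu_{00} \exp\!\left(\frac{E_a}{E_{MN}}\right),
\end{align}
% so Arrhenius lines measured at different gate voltages (and hence
% different $E_a$) intersect at a single isokinetic temperature
% $T_{MN} = E_{MN}/k_B$.
```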
Abstract:
Acoustic telemetry and standard tag-recapture were used to determine the home range and residency of juveniles and sub-adults of Diplodus sargus and Diplodus vulgaris in the Ria Formosa (Portugal) coastal lagoon. The maximum time between recaptures for the standard tag-recapture method was 128 days for D. sargus and 30 days for D. vulgaris. The majority of the fish were recaptured in the vicinity of the tagging location. Fish tagged with acoustic transmitters had a maximum period between first and last detections of 62 days for D. sargus and 260 days for D. vulgaris. Minimum convex polygon areas ranged between 148 024 m(2) and 525 930 m(2) for D. sargus and between 23 786 m(2) and 42 134 m(2) for D. vulgaris. Both species presented a high residency index between first and last detections. Two D. sargus tagged with acoustic tags were recaptured by fishermen outside the coastal lagoon, at distances of 12 km and 90 km from the tagging position, providing evidence that this species leaves the Ria Formosa for the adjacent coastal waters during winter. The results of this study reinforce the importance of the Ria Formosa as a nursery for D. sargus and D. vulgaris on the south coast of Portugal. (C) 2009 Elsevier Ltd. All rights reserved.
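As a generic illustration of the minimum convex polygon (MCP) areas reported above (not the authors' code; assumes numpy and scipy are available, and hypothetical detection coordinates), an MCP area can be computed by taking the convex hull of the detection positions and applying the shoelace formula:

```python
# Generic minimum convex polygon (MCP) home-range area: convex hull of
# detection positions (x, y in metres), then the shoelace formula.
# Illustrative only; not the authors' analysis code.
import numpy as np
from scipy.spatial import ConvexHull

def mcp_area(points):
    """Area (m^2) of the minimum convex polygon enclosing the points."""
    hull = ConvexHull(points)
    xy = points[hull.vertices]        # hull vertices, counterclockwise order
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Hypothetical detections (metres in a local projected grid):
detections = np.array([[0, 0], [500, 120], [430, 600], [90, 480], [250, 300]])
print(f"MCP area: {mcp_area(detections):.0f} m^2")
```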
Abstract:
This study investigates variation in IT professionals' experience of ethics with a view to enhancing their formation and support. This is explored through an examination of the experience of IT, of IT professional ethics and of IT professional ethics education. The study's principal contribution is the empirical study and description of IT professionals' experience of ethics. The empirical phase is preceded by a review of conceptions of IT and followed by an application of the findings to IT education. The empirical findings are based on 30 semi-structured interviews with IT professionals representing a wide range of demographics, experience and IT sub-disciplines. Their experience of ethics is depicted as five citizenships: Citizenship of my world, Citizenship of the corporate world, Citizenship of a shared world, Citizenship of the client's world and Citizenship of the wider world. These signify an expanding awareness, which progressively accords rights to others and defines responsibility in terms of others. The empirical findings inform a Model of Ethical IT, which maps an IT professional space increasingly oriented towards others. Such a model provides a conceptual tool, available to prompt discussion and reflection, which may be employed in pursuing formation aimed at experiential change; its usefulness for the education of IT professionals with respect to ethics is explored. The research approach employed is phenomenography, a method that seeks to elicit and represent variation of experience. It understands experience as a relationship between a subject (IT professionals) and an object (ethics), and describes this relationship in terms of its foci and boundaries. The findings culminate in three observations, namely that change is indicated in the formation and support of IT professionals in:
1. IT professionals' experience of their discipline, moving towards a focus on information users;
2. IT professionals' experience of professional ethics, moving towards the adoption of other-centred attitudes; and
3. IT professionals' experience of professional development, moving towards an emphasis on a change in lived experience.
Based on these results, employers, educators and professional bodies may want to evaluate how they approach professional formation and support if they aim to promote a comprehensive awareness of ethics in IT professionals.
Abstract:
Although anxiety disorders are documented in the literature for new mothers (and, less so, for fathers), rates of postpartum caseness tend to include only those with depression when diagnostic interviews, or self-report measures validated on such interviews, are used. This methodology therefore underestimates the true percentage of women and men who experience significant psychological difficulties postpartum, with implications for assessment, treatment and screening for postnatal mood disorders. Two studies were conducted on a total of 408 women and 356 men expecting their first child. They were recruited antenatally and interviewed at 6 weeks postpartum using the Diagnostic Interview Schedule. DSM-IV criteria were applied to determine the presence since birth of depression (major or minor), panic disorder, acute adjustment disorder with anxiety (meeting the criteria for generalised anxiety disorder except for the duration criterion), and phobia. The inclusion of diagnostic assessment for panic disorder and acute adjustment disorder with anxiety increased the rates of caseness by between 57% and 100% for mothers, and between 31% and 130% for fathers, over the rates for major or minor depression. Inclusion of assessment for phobia further increased the rates of disorder in both samples.
Abstract:
Local climate is a critical element in the design of energy-efficient buildings. In this paper, ten years of historical weather data for Australia's eight capital cities were profiled and analysed to characterize the variation of climatic variables across Australia. The method of descriptive statistics was employed, and either the pattern of cumulative distribution and/or the profile of percentage distribution is presented. It was found that although weather variables vary with location, there is often a good, nearly linear relation between a weather variable and its cumulative percentage over the majority of the middle part of the cumulative curve. By comparing the slopes of these distribution profiles, it may be possible to determine the relative range of variation of a particular weather variable for a given city. The implications of these distribution profiles of key weather variables for energy-efficient building design are also discussed.
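As a minimal sketch of the kind of profiling described above (hypothetical data, not the authors' analysis code), the cumulative percentage distribution of a weather variable and the slope of its nearly linear middle segment can be computed as follows:

```python
# Minimal sketch: build the cumulative percentage distribution of a weather
# variable and estimate the slope of its middle (nearly linear) segment.
# Hypothetical synthetic data; not the paper's analysis code.
import numpy as np

rng = np.random.default_rng(0)
dry_bulb_c = rng.normal(loc=18.0, scale=6.0, size=10 * 8760)  # ~10 years, hourly

values = np.sort(dry_bulb_c)
cum_pct = 100.0 * np.arange(1, values.size + 1) / values.size

# Middle portion of the curve, e.g. the 20th-80th cumulative-percentage band:
mid = (cum_pct >= 20) & (cum_pct <= 80)
slope, intercept = np.polyfit(values[mid], cum_pct[mid], deg=1)
print(f"Slope of middle segment: {slope:.2f} % per degree C")
```

A steeper slope means the variable is concentrated in a narrower range, which is the comparison across cities the paper draws from these profiles.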
Abstract:
The concept of radar was developed for the estimation of the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal caused by the rate of change of the time-delay from the target. As researchers further developed conventional radar systems, it became apparent that additional information was contained in the backscattered signal and that this information could in fact be used to describe the shape of the target itself. This is because a target can be considered to be a collection of individual point scatterers, each of which has its own velocity and time-delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present, radar imaging falls into two main categories. The first relates to the case where the backscattered signal is considered to be deterministic; the second to the case where the backscattered signal is of a stochastic nature. In both cases the information describing the target's scattering function is extracted by use of the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations it is often necessary to have the transmitter and the receiver of the radar system sited at different locations. The problem in these situations is that a reference signal must then be present in order to calculate the ambiguity function. This causes an additional problem, in that detailed phase information about the transmitted signal is then required at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development of, and subsequent discussion of the theory of, radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e. where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e. where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be able to be implemented in a timely manner. For this reason an optical architecture is proposed, specifically designed to obtain the required time and frequency resolution when using laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer simulated using an optical compiler.
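For context, the ambiguity function referred to above has the following standard narrowband form (textbook notation, not quoted from the thesis):

```latex
% Standard narrowband cross-ambiguity function (textbook form, not quoted
% from the thesis): r(t) is the received signal, s(t) the transmitted
% reference, tau the delay and nu the Doppler shift.
\begin{equation}
  \chi(\tau,\nu) \;=\; \int_{-\infty}^{\infty}
    r(t)\, s^{*}(t-\tau)\, e^{-j 2\pi \nu t}\, dt
\end{equation}
% A point scatterer at delay tau_0 with Doppler shift nu_0 concentrates
% |\chi|^2 near (tau_0, nu_0), which is what maps scatterers to
% range / cross-range image coordinates.
```

The bistatic problem described in the abstract is that computing this correlation requires the reference s(t), including its phase, at the receiver.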
Abstract:
This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low-angle scatter flux for high-atomic-number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy-dependent information from planar X-ray beams. Such a theoretical framework is developed, and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. This theoretical framework forms the basis for analytical models of bone mineral measurement by dual-energy X-ray photon absorptiometry techniques. Monte Carlo models of dual-energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work, the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual-energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components; bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions, and hence would indicate the potential to overcome a major problem of the two-component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone, and that it has poorer precision (approximately twice the coefficient of variation) than standard DEXA measurements. These factors may limit the usefulness of the technique.
These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have:
1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements;
2. demonstrated that the statistical precision of the proposed DPA(+) three-tissue-component technique is poorer than that of the standard DEXA two-tissue-component technique;
3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three-component model of fat, lean soft tissue and bone mineral; and
4. provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system.
The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
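The "two-dimensional nature of tissue decomposition" mentioned above can be made concrete with the standard dual-energy attenuation equations (textbook form, not quoted from the thesis):

```latex
% Standard two-component DEXA decomposition (textbook form, not taken from
% the thesis). I^L, I^H: transmitted intensities at low / high energy;
% (mu/rho): mass attenuation coefficients; sigma_b, sigma_s: areal
% densities (g/cm^2) of bone mineral and soft tissue.
\begin{align}
  \ln\frac{I_0^{L}}{I^{L}} &= \left(\tfrac{\mu}{\rho}\right)_{b}^{L}\sigma_b
                            + \left(\tfrac{\mu}{\rho}\right)_{s}^{L}\sigma_s, \\
  \ln\frac{I_0^{H}}{I^{H}} &= \left(\tfrac{\mu}{\rho}\right)_{b}^{H}\sigma_b
                            + \left(\tfrac{\mu}{\rho}\right)_{s}^{H}\sigma_s,
\end{align}
% a 2x2 linear system: two transmission measurements resolve exactly two
% components, which is why resolving a third component (fat vs lean soft
% tissue) needs the extra path-length measurement of the DPA(+) technique.
```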
Abstract:
SAP and its research partners have been developing a language for describing details of services from various viewpoints, called the Unified Service Description Language (USDL). At the time of writing, version 3.0 describes technical implementation aspects of services, as well as stakeholders, pricing, lifecycle, and availability. Work is also underway to address other business and legal aspects of services. This language is designed to be used in service portfolio management, with a repository of service descriptions being available to various stakeholders in an organisation to allow for service prioritisation, development, deployment and lifecycle management. The structure of the USDL metadata is specified using an object-oriented metamodel that conforms to UML, MOF and EMF Ecore. As such it is amenable to code generation for implementations of repositories that store service description instances. Although Web services toolkits can be used to make these programming language objects available as a set of Web services, the practicalities of writing distributed clients against over one hundred class definitions, containing several hundred attributes, will make for very large WSDL interfaces and highly inefficient "chatty" implementations. This paper gives the high-level design for a completely model-generated repository for any version of USDL (or any other data-only metamodel), which uses the Eclipse Modelling Framework's Java code generation, along with several open source plugins, to create a robust, transactional repository running in a Java application with a relational datastore. However, the repository exposes a generated WSDL interface at a coarse granularity, suitable for distributed client code and user-interface creation. It uses heuristics to drive code generation to bridge between the Web service and EMF granularities.
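The granularity trade-off described above can be sketched in miniature (Python for brevity, with hypothetical names throughout; the actual repository is EMF/Java-based): a "chatty" client pays one network round trip per attribute, whereas a coarse-grained facade moves a whole service description per call.

```python
# Toy illustration of fine- vs coarse-grained repository interfaces
# (hypothetical API, not the USDL repository's actual interface).
import json

class ChattyRepository:
    """Fine-grained: one remote round trip per attribute access."""
    def get_attribute(self, service_id: str, attr: str) -> str:
        ...  # a network call per attribute -> hundreds of round trips

class CoarseRepository:
    """Coarse-grained: one round trip per whole service description."""
    def get_description(self, service_id: str) -> dict:
        # One network call returns the full object graph, serialized.
        return json.loads('{"id": "svc-1", "name": "Billing", "price": 9.9}')

repo = CoarseRepository()
desc = repo.get_description("svc-1")   # single remote call
print(desc["name"], desc["price"])     # purely local access thereafter
```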
Abstract:
This chapter sets out to identify related issues surrounding the use of Information and Computer Technology (ICT) in developing relationships between local food producers and consumers (both individuals and businesses). Three surveys were conducted in South-East Wales to consider the overlapping issues. The first concerned the role of ICT in relationships between farmers' market (FM) vendors and their traditional customers. The second survey examined potential new markets for farmers in the propensity of restaurants and hotels to buy locally, the types and sources of purchases made, and the modes of advertising of these businesses. The final survey focused on the potential to expand local web-based selling of farmers' produce in the future, by examining the potential market of high-ICT-use small hotels. Despite the development of tailored ICT facilities, farmers' market vendors and current individual customers are antipathetic to them. In addition, whilst there is a desire for more local produce, particularly amongst independent local restaurants and hotels, this has not been capitalised upon, and there is much work to be done, even amongst high-ICT-use small hotels, to expand the range and scope of farmers' markets. This raises the need for creation and utilisation of enhanced logistics, payment and marketing management capacity available through a web-based presence, linked to promotion of FMs in business-to-business (B2B) links with local restaurants and hotels. This linked quantitative research highlights the potential value in substantial development of both web portals and supporting logistics to exploit this potential in the future.
Abstract:
OBJECTIVE: Depression, anxiety and alcohol misuse frequently co-occur. While there is an extensive literature reporting on the efficacy of psychological treatments that target depression, anxiety or alcohol misuse separately, less research has examined treatments that address these disorders when they co-occur. We conducted a systematic review to determine whether psychological interventions that target alcohol misuse among people with co-occurring depressive or anxiety disorders are effective. DATA SOURCES: We systematically searched the PubMed and PsycINFO databases from inception to March 2010. Individual searches in alcohol, depression and anxiety were conducted, and were limited to 'human' published 'randomized controlled trials' or 'sequential allocation' articles written in English. STUDY SELECTION: We identified randomized controlled trials that compared manual-guided psychological interventions for alcohol misuse among individuals with depressive or anxiety disorders. Of 1540 articles identified, eight met the inclusion criteria for the review. DATA EXTRACTION: From each study, we recorded alcohol and mental health outcomes, and other relevant clinical factors including age, gender ratio, follow-up length and drop-out rates. The quality of the studies was also assessed. DATA SYNTHESIS: Motivational interviewing and cognitive-behavioral interventions were associated with significant reductions in alcohol consumption and depressive and/or anxiety symptoms. Although brief interventions were associated with significant improvements in both mental health and alcohol use variables, longer interventions produced even better outcomes. CONCLUSIONS: There is accumulating evidence for the effectiveness of motivational interviewing and cognitive behavior therapy for people with co-occurring alcohol and depressive or anxiety disorders.