808 results for new web based frameworks
Abstract:
In this paper, a new model-based proportional–integral–derivative (PID) tuning and controller approach is introduced for Hammerstein systems that are identified on the basis of the observational input/output data. The nonlinear static function in the Hammerstein system is modelled using a B-spline neural network. The control signal is composed of a PID controller, together with a correction term. Both the parameters in the PID controller and the correction term are optimized on the basis of minimizing the multistep ahead prediction errors. In order to update the control signal, the multistep ahead predictions of the Hammerstein system based on B-spline neural networks and the associated Jacobian matrix are calculated using the de Boor algorithms, including both the functional and derivative recursions. Numerical examples are utilized to demonstrate the efficacy of the proposed approaches.
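For readers unfamiliar with the de Boor recursions mentioned above, the following minimal Python sketch shows the functional (Cox-de Boor) and derivative recursions for a single B-spline basis function; the knot vector and degree are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bspline_basis(i, k, x, t):
    """Cox-de Boor recursion: value of the i-th B-spline basis of degree k at x."""
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + k] != t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, x, t)
    right = 0.0
    if t[i + k + 1] != t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(i + 1, k - 1, x, t)
    return left + right

def bspline_basis_deriv(i, k, x, t):
    """Derivative recursion of the kind used to form the Jacobian of a B-spline model output."""
    d = 0.0
    if t[i + k] != t[i]:
        d += k * bspline_basis(i, k - 1, x, t) / (t[i + k] - t[i])
    if t[i + k + 1] != t[i + 1]:
        d -= k * bspline_basis(i + 1, k - 1, x, t) / (t[i + k + 1] - t[i + 1])
    return d

# Illustrative knot vector and degree (assumptions, not from the paper)
knots = np.array([0, 0, 0, 1, 2, 3, 4, 4, 4], dtype=float)
degree = 2
print(bspline_basis(2, degree, 1.5, knots), bspline_basis_deriv(2, degree, 1.5, knots))
```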
Abstract:
We present a new coefficient-based retrieval scheme for estimation of sea surface temperature (SST) from the Along Track Scanning Radiometer (ATSR) instruments. The new coefficients are banded by total column water vapour (TCWV), obtained from numerical weather prediction analyses. TCWV banding reduces simulated regional retrieval biases to < 0.1 K compared to biases ~ 0.2 K for global coefficients. Further, detailed treatment of the instrumental viewing geometry reduces simulated view-angle related biases from ~ 0.1 K down to < 0.005 K for dual-view retrievals using channels at 11 and 12 μm. A novel analysis of trade-offs related to the assumed noise level when defining coefficients is undertaken, and we conclude that adding a small nominal level of noise (0.01 K) is optimal for our purposes. When applied to ATSR observations, some inter-algorithm biases appear as TCWV-related differences in SSTs estimated from different channel combinations. The final step in coefficient determination is to adjust the offset coefficient in each TCWV band to match results from a reference algorithm. This reference uses the dual-view observations at 3.7 and 11 μm. The adjustment is independent of in situ measurements, preserving the independence of the retrievals. The choice of reference is partly motivated by uncertainty in the calibration of the 12 μm channel of the Advanced ATSR. Lastly, we model the sensitivities of the new retrievals to changes in TCWV and changes in true SST, confirming that dual-view SSTs are most appropriate for climatological applications.
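A minimal sketch of what a TCWV-banded coefficient retrieval looks like in practice is given below; the band edges, coefficient values and channel set are hypothetical placeholders, not the published ATSR coefficients.

```python
import numpy as np

# Hypothetical coefficient table: one row of [offset, a_11um, a_12um] per TCWV band.
# Values and band edges are placeholders for illustration only.
tcwv_band_edges = np.array([0.0, 20.0, 40.0, 60.0, 80.0])   # kg m^-2
coeffs = np.array([
    [1.2, 2.1, -1.1],
    [1.5, 2.3, -1.3],
    [1.9, 2.6, -1.6],
    [2.4, 3.0, -2.0],
])

def retrieve_sst(bt_11um, bt_12um, tcwv):
    """Banded coefficient retrieval: pick the coefficient row for the TCWV band,
    then form a linear combination of the brightness temperatures."""
    band = np.clip(np.searchsorted(tcwv_band_edges, tcwv) - 1, 0, len(coeffs) - 1)
    a0, a1, a2 = coeffs[band]
    return a0 + a1 * bt_11um + a2 * bt_12um

print(retrieve_sst(bt_11um=290.0, bt_12um=288.5, tcwv=35.0))
```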
Abstract:
This paper introduces a new agent-based model, which incorporates the actions of individual homeowners in a long-term domestic stock model, and details how it was applied in energy policy analysis. The results indicate that current policies are likely to fall significantly short of the 80% target and suggest that current subsidy levels need re-examining. In the model, current subsidy levels appear to offer too much support to some technologies, which in turn leads to the suppression of other technologies that have a greater energy saving potential. The model can be used by policy makers to develop further scenarios to find alternative, more effective, sets of policy measures. The model is currently limited to the owner-occupied stock in England, although it can be expanded, subject to the availability of data.
Abstract:
Tropical Applications of Meteorology Using Satellite Data and Ground-Based Observations (TAMSAT) rainfall monitoring products have been extended to provide spatially contiguous rainfall estimates across Africa. This has been achieved through a new, climatology-based calibration, which varies in both space and time. As a result, cumulative estimates of rainfall are now issued at the end of each 10-day period (dekad) at 4-km spatial resolution with pan-African coverage. The utility of the products for decision making is improved by the routine provision of validation reports, for which the 10-day (dekadal) TAMSAT rainfall estimates are compared with independent gauge observations. This paper describes the methodology by which the TAMSAT method has been applied to generate the pan-African rainfall monitoring products. It is demonstrated through comparison with gauge measurements that the method provides skillful estimates, although with a systematic dry bias. This study illustrates TAMSAT’s value as a complementary method of estimating rainfall through examples of successful operational application.
Abstract:
Minimizing environmental impact and using natural resources in a sustainable and efficient manner must be addressed at the early design stage of an environmentally-conscious heating, ventilating and air-conditioning system. Energy supply options play a significant role in the total environmental load of heating, ventilating and air-conditioning systems. To assess the environmental impact of different energy options, a new method based on Emergy Analysis is proposed. Emergy Accounting was first developed and widely used in the area of ecological engineering, but this is the first time it has been used in building services engineering. The environmental impacts due to the energy options are divided into four categories under the Emergy framework: the depletion of natural resources, the greenhouse effect (carbon dioxide equivalents), the acid rain effect (sulphur dioxide equivalents), and anthropogenic heat release. The depletion of non-renewable natural resources is indicated by the Environmental Load Ratio, and the environmental carrying capacity is developed to represent the environmental service required to dilute the pollutants and anthropogenic heat released. This Emergy evaluation method provides a new way to integrate different environmental impacts under the same framework and thus facilitates better system choices. A case study of six different energy options, comprising renewable and non-renewable energy, was performed using Emergy Theory, and their relative environmental impacts were compared. The results show that the method of electricity generation, especially for electricity-powered systems, is the most important factor in determining overall environmental performance. The direct-fired lithium-bromide absorption type consumes more non-renewable energy and contributes more to the urban heat island effect than other options with the same electricity supply. Using Emergy Analysis, designers and clients can make better-informed, environmentally-conscious selections of heating, ventilating and air-conditioning systems.
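As a small illustration of one of the indicators mentioned above, the sketch below computes the Environmental Load Ratio using its standard emergy-accounting definition, ELR = (N + F) / R, where N is local non-renewable emergy, F is purchased emergy and R is renewable emergy; the numerical values are placeholders, not results from the case study.

```python
def environmental_load_ratio(nonrenewable_emergy, purchased_emergy, renewable_emergy):
    """Standard emergy indicator: ELR = (N + F) / R.
    Higher values indicate greater pressure from non-renewable inputs."""
    return (nonrenewable_emergy + purchased_emergy) / renewable_emergy

# Illustrative emergy flows in solar emjoules (placeholder values)
print(environmental_load_ratio(nonrenewable_emergy=8.0e15,
                               purchased_emergy=2.0e15,
                               renewable_emergy=1.0e15))
```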
Abstract:
Understanding Digital Literacies provides an accessible and timely introduction to new media literacies. It supplies readers with the theoretical and analytical tools with which to explore the linguistic and social impact of a host of new digital literacy practices. Each chapter in the volume covers a different topic, presenting an overview of the major concepts, issues, problems and debates surrounding the topic, while also encouraging students to reflect on and critically evaluate their own language and communication practices. Features include: coverage of a diverse range of digital media texts, tools and practices including blogging, hypertextual organisation, Facebook, Twitter, YouTube, Wikipedia, websites and games; an extensive range of examples and case studies to illustrate each topic, such as how blogs have affected our thinking about communication, how the creation and sharing of digital images and video can bring about shifts in social roles, and how the design of multiplayer online games for children can promote different ideologies; a variety of discussion questions and mini-ethnographic research projects involving exploration of various patterns of media production and communication between peers, for example in the context of Wikinomics and peer production, social networking and civic participation, and digital literacies at work; end-of-chapter suggestions for further reading and links to key web and video resources; and a companion website providing supplementary material for each chapter, including summaries of key issues, additional web-based exercises, and links to further resources such as useful websites, articles, videos and blogs.
Abstract:
Collocations between two satellite sensors are occasions where both sensors observe the same place at roughly the same time. We study collocations between the Microwave Humidity Sounder (MHS) on-board NOAA-18 and the Cloud Profiling Radar (CPR) on-board CloudSat. First, a simple method is presented to obtain those collocations and this method is compared with a more complicated approach found in literature. We present the statistical properties of the collocations, with particular attention to the effects of the differences in footprint size. For 2007, we find approximately two and a half million MHS measurements with CPR pixels close to their centrepoints. Most of those collocations contain at least ten CloudSat pixels and image relatively homogeneous scenes. In the second part, we present three possible applications for the collocations. Firstly, we use the collocations to validate an operational Ice Water Path (IWP) product from MHS measurements, produced by the National Environment Satellite, Data and Information System (NESDIS) in the Microwave Surface and Precipitation Products System (MSPPS). IWP values from the CloudSat CPR are found to be significantly larger than those from the MSPPS. Secondly, we compare the relation between IWP and MHS channel 5 (190.311 GHz) brightness temperature for two datasets: the collocated dataset, and an artificial dataset. We find a larger variability in the collocated dataset. Finally, we use the collocations to train an Artificial Neural Network and describe how we can use it to develop a new MHS-based IWP product. We also study the effect of adding measurements from the High Resolution Infrared Radiation Sounder (HIRS), channels 8 (11.11 μm) and 11 (8.33 μm). This shows a small improvement in the retrieval quality. The collocations described in the article are available for public use.
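A simple way to obtain such collocations, in the spirit of the simple method the abstract mentions, is to keep every CPR pixel whose centre lies within the MHS footprint radius and within a short time window of the MHS observation. The sketch below illustrates this; the radius and time window are illustrative choices, not the values used in the paper.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def collocate(mhs_pixel, cpr_pixels, max_dist_km=7.5, max_dt_s=900.0):
    """Return CPR pixels whose centres fall inside the MHS footprint radius
    and within the time window (radius and window are illustrative choices)."""
    return [p for p in cpr_pixels
            if haversine_km(mhs_pixel["lat"], mhs_pixel["lon"], p["lat"], p["lon"]) <= max_dist_km
            and abs(mhs_pixel["time"] - p["time"]) <= max_dt_s]

# Tiny usage example with made-up positions and times (seconds)
mhs = {"lat": 52.0, "lon": 4.0, "time": 0.0}
cpr = [{"lat": 52.03, "lon": 4.05, "time": 120.0},
       {"lat": 53.5, "lon": 6.0, "time": 60.0}]
print(collocate(mhs, cpr))
```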
Abstract:
Background: There is evidence that physical activity (PA) can attenuate the influence of the fat mass- and obesity-associated (FTO) genotype on the risk of developing obesity. However, whether providing personalized information on FTO genotype leads to changes in PA is unknown. Objective: The purpose of this study was to determine if disclosing FTO risk had an impact on change in PA following a 6-month intervention. Methods: The single nucleotide polymorphism (SNP) rs9939609 in the FTO gene was genotyped in 1279 participants of the Food4Me study, a four-arm, Web-based randomized controlled trial (RCT) in 7 European countries on the effects of personalized advice on nutrition and PA. PA was measured objectively using a TracmorD accelerometer and was self-reported using the Baecke questionnaire at baseline and 6 months. Differences in baseline PA variables between risk (AA and AT genotypes) and nonrisk (TT genotype) carriers were tested using multiple linear regression. The impact of FTO risk disclosure on PA change at 6 months was assessed among participants with inadequate PA by including an interaction term in the model: disclosure (yes/no) × FTO risk (yes/no). Results: At baseline, data on PA were available for 874 and 405 participants with the risk and nonrisk FTO genotypes, respectively. There were no significant differences in objectively measured or self-reported baseline PA between risk and nonrisk carriers. A total of 807 of the 1120 participants (72.05%) in the personalized groups were encouraged to increase PA at baseline. Knowledge of FTO risk had no impact on PA in either risk or nonrisk carriers after the 6-month intervention. Attrition was higher in nonrisk participants for whom genotype was disclosed (P=.01) compared with their at-risk counterparts. Conclusions: No association between baseline PA and FTO risk genotype was observed. There was no added benefit of disclosing FTO risk on changes in PA in this personalized intervention. Further RCT studies are warranted to confirm whether disclosure of nonrisk genetic test results has adverse effects on engagement in behavior change.
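The disclosure × FTO-risk interaction test described in the Methods can be expressed compactly with a regression formula. The hedged sketch below uses statsmodels with a toy data frame; the variable names and values are hypothetical, not the Food4Me data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame; column names are illustrative, not the Food4Me variable names.
df = pd.DataFrame({
    "pa_change":  [120, -30, 45, 80, -10, 60, 15, 90],   # change in PA at 6 months
    "disclosure": [1, 0, 1, 0, 1, 0, 1, 0],               # FTO risk disclosed (yes/no)
    "fto_risk":   [1, 1, 0, 0, 1, 0, 1, 0],               # risk-allele carrier (yes/no)
})

# The disclosure x FTO-risk interaction term tests whether the effect of
# disclosure on PA change differs between risk and non-risk carriers.
model = smf.ols("pa_change ~ disclosure * fto_risk", data=df).fit()
print(model.summary())
```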
Abstract:
The use of kilometre-scale ensembles in operational forecasting provides new challenges for forecast interpretation and evaluation to account for uncertainty on the convective scale. A new neighbourhood based method is presented for evaluating and characterising the local predictability variations from convective scale ensembles. Spatial scales over which ensemble forecasts agree (agreement scales, S^A) are calculated at each grid point ij, providing a map of the spatial agreement between forecasts. By comparing the average agreement scale obtained from ensemble member pairs (S^A(mm)_ij), with that between members and radar observations (S^A(mo)_ij), this approach allows the location-dependent spatial spread-skill relationship of the ensemble to be assessed. The properties of the agreement scales are demonstrated using an idealised experiment. To demonstrate the methods in an operational context the S^A(mm)_ij and S^A(mo)_ij are calculated for six convective cases run with the Met Office UK Ensemble Prediction System. The S^A(mm)_ij highlight predictability differences between cases, which can be linked to physical processes. Maps of S^A(mm)_ij are found to summarise the spatial predictability in a compact and physically meaningful manner that is useful for forecasting and for model interpretation. Comparison of S^A(mm)_ij and S^A(mo)_ij demonstrates the case-by-case and temporal variability of the spatial spread-skill, which can again be linked to physical processes.
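To make the agreement-scale idea concrete, the sketch below searches, at a single grid point, for the smallest neighbourhood half-width at which two forecast fields agree; the specific agreement criterion and threshold schedule used here are plausible assumptions rather than the exact published definition.

```python
import numpy as np

def agreement_scale(f1, f2, i, j, s_max=20, alpha=0.5):
    """Smallest neighbourhood half-width S at which two forecast fields agree
    at grid point (i, j). The criterion (squared difference relative to the
    sum of squared neighbourhood means, with a threshold that relaxes with
    scale) is one plausible choice, not necessarily the published one."""
    for s in range(s_max + 1):
        a = f1[max(i - s, 0):i + s + 1, max(j - s, 0):j + s + 1].mean()
        b = f2[max(i - s, 0):i + s + 1, max(j - s, 0):j + s + 1].mean()
        denom = a ** 2 + b ** 2
        d = (a - b) ** 2 / denom if denom > 0 else 0.0
        if d <= alpha + (1.0 - alpha) * s / s_max:
            return s
    return s_max

# Example: two synthetic "rain" fields; a map of agreement scales is built by
# evaluating every grid point in turn.
rng = np.random.default_rng(0)
f1, f2 = rng.gamma(2.0, size=(50, 50)), rng.gamma(2.0, size=(50, 50))
print(agreement_scale(f1, f2, 25, 25))
```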
Abstract:
This paper describes a novel template-based meshing approach for generating good quality quadrilateral meshes from 2D digital images. This approach builds upon an existing image-based mesh generation technique called Imesh, which enables us to create a segmented triangle mesh from an image without the need for an image segmentation step. Our approach generates a quadrilateral mesh using an indirect scheme, which converts the segmented triangle mesh created by the initial steps of the Imesh technique into a quadrilateral one. The triangle-to-quadrilateral conversion makes use of template meshes of triangles. To ensure good element quality, the conversion step is followed by a smoothing step, which is based on a new optimization-based procedure. We show several examples of meshes generated by our approach, and present a thorough experimental evaluation of their quality.
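As a rough illustration of an indirect triangle-to-quadrilateral conversion, the sketch below greedily merges pairs of triangles that share an edge into quadrilaterals; it is a much-simplified stand-in for the template-based conversion and smoothing described in the abstract.

```python
def pair_triangles_into_quads(triangles):
    """Greedy indirect conversion: merge pairs of triangles that share an edge
    into quadrilaterals; unpaired triangles are returned unchanged."""
    edge_to_tri = {}
    for idx, tri in enumerate(triangles):
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            edge_to_tri.setdefault(frozenset((a, b)), []).append(idx)

    used, quads = set(), []
    for edge, tris in edge_to_tri.items():
        if len(tris) == 2 and not (set(tris) & used):
            t1, t2 = triangles[tris[0]], triangles[tris[1]]
            shared = set(edge)
            v1 = next(v for v in t1 if v not in shared)   # apex of first triangle
            v2 = next(v for v in t2 if v not in shared)   # apex of second triangle
            a, b = tuple(shared)
            quads.append((v1, a, v2, b))                  # quad ordered around the shared edge
            used.update(tris)
    leftovers = [t for i, t in enumerate(triangles) if i not in used]
    return quads, leftovers

# Two triangles sharing edge (1, 2) become one quad; the third stays a triangle.
print(pair_triangles_into_quads([(0, 1, 2), (1, 3, 2), (2, 3, 4)]))
```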
Abstract:
The study of pharmacokinetic properties (PK) is of great importance in drug discovery and development. In the present work, PK/DB (a new freely available database for PK) was designed with the aim of creating robust databases for pharmacokinetic studies and in silico absorption, distribution, metabolism and excretion (ADME) prediction. Comprehensive, web-based and easy to access, PK/DB manages 1203 compounds representing 2973 pharmacokinetic measurements, including five models for in silico ADME prediction (human intestinal absorption, human oral bioavailability, plasma protein binding, blood-brain barrier and water solubility).
Abstract:
In this paper we present a new wavelet-based algorithm for low-cost computation of the cepstrum. It can be used for real-time precise pitch determination in automatic speech and speaker recognition systems. Many wavelet families are examined to determine the one that works best. The results confirm the efficacy and accuracy of the proposed technique for pitch extraction.
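For context, the sketch below shows the classical FFT-based cepstral pitch estimator that a lower-cost wavelet-based scheme would approximate; the frame length and pitch search range are illustrative choices.

```python
import numpy as np

def cepstral_pitch(frame, fs, fmin=60.0, fmax=400.0):
    """Classical cepstral pitch estimation: inverse transform of the log
    magnitude spectrum, then a peak search in the expected quefrency range."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))
    qmin, qmax = int(fs / fmax), int(fs / fmin)
    peak = qmin + np.argmax(cepstrum[qmin:qmax])
    return fs / peak

# Synthetic 200 Hz voiced-like frame as a quick sanity check
fs = 16000
t = np.arange(1024) / fs
frame = sum(np.sin(2 * np.pi * 200 * k * t) / k for k in range(1, 6))
print(round(cepstral_pitch(frame, fs)))
```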
Abstract:
Advanced Building Energy Data Visualization is a way to detect performance problems in commercial buildings. By placing sensors in a building that collect data on, for example, air temperature and electrical power, it becomes possible to process the data in Data Visualization software. This software generates visual diagrams so the building manager or building operator can see whether, for example, the power consumption is too high. A first step (before sensors are installed in a building) to see how much energy a building consumes can be to use a Benchmarking Tool. There are a number of Benchmarking Tools available for free on the Internet. Each tool has a slightly different approach, but they all show how much energy a building consumes compared to other similar buildings. In this study a new web design for the benchmarking tool CalARCH has been developed. CalARCH is developed at the Berkeley Lab in Berkeley, California, USA. CalARCH uses data collected only from buildings in California, and is only for comparing buildings in California with other similar buildings in the state. Five different versions of the web site were made. Then a web survey was done to determine which version would be the best for CalARCH. The results showed that Version 5 and Version 3 were the best. Then a new version was made, based on these two versions. This study was made at the Lawrence Berkeley Laboratory.
Abstract:
This research is based on consumer complaints with respect to recently purchased consumer electronics. This research document will investigate device management as a tool used to aid consumers and manage consumers' mobile products in order to resolve issues before the consumer is aware one exists. The problem at the present time is that mobile devices are becoming very advanced pieces of technology, and not all manufacturers and network providers have kept up with the support needs of end users. As such, the subject of the research is to investigate how device management could be used as a method to promote research and development of mobile devices, and provide a better experience for the consumer. The wireless world is becoming increasingly complex as revenue opportunities are driven by new and innovative data services. We can no longer expect the customer to have the knowledge or ability to configure their own device. Device Management platforms can address the challenges of device configuration and support through new enabling technologies. Leveraging these technologies will allow a network operator to reduce the cost of subscriber ownership, drive increased ARPU (Average Revenue per User) by removing barriers to adoption, reduce churn by improving the customer experience, and increase customer loyalty. DM technologies provide a flexible and powerful management method but manage the same device features that have historically been configured manually through call centres or by the end user making changes directly on the device. For this reason DM technologies must be treated as part of a wider support solution. The traditional requirements for discovery, fault finding, troubleshooting and diagnosis are still as relevant with DM as they are in the current human support environment, yet the current generation of solutions does little to address this problem. In deploying an effective Device Management solution, the network operator must consider the integration of the DM platform, interfacing with many areas of the business, supported by knowledge of the relationship between devices, applications, solutions and services maintained on an ongoing basis. Complementing the DM solution with published device information, setup guides, training material and web-based tools will ensure the quality of the customer experience, ensuring that problems are completely resolved and driving data usage by focusing customer education on the use of the wireless service. In this way device management becomes a tool used both internally within the network operator or device vendor and by the customers themselves, with each user empowered to effectively manage the device without any prior knowledge or experience, confident that changes they apply will be relevant, accurate, stable and compatible. The value offered by an effective DM solution with an expert knowledge service will become a significant differentiator for the network operator in an ever more competitive wireless market. This research document is intended to highlight some of the issues the industry faces as device management technologies become more prevalent, and offers some potential solutions to simplify the increasingly complex task of managing devices on the network, where device management can be used as a tool to aid customer relations and manage customers' mobile products in order to resolve issues before the user is aware one exists.
The research is broken down into the following areas: Customer Relationship Management, device management, the role of knowledge within DM, companies that have successfully implemented device management, and the future of device management and CRM. It also includes questionnaires aimed at technical support agents and mobile device users, and interviews carried out with CRM managers within a support centre to further the evidence gathered. To conclude, the document considers the advantages and disadvantages of device management and attempts to determine the influence it will have over customer support centres, and what methods could be used to implement it.
Abstract:
Wikipedia is a free, web-based, collaborative, multilingual encyclopedia project supported by the non-profit Wikimedia Foundation. Because Wikipedia is free and allows open access for everyone to edit articles, the quality of articles may be affected. Since people do not have equal levels of knowledge, and different people have different opinions about a topic, there may be differences between the contributions made by different authors. To overcome this situation it is important to classify the articles so that articles of good quality can be separated from poor quality articles, which should be removed from the database. The aim of this study is to classify the articles of Wikipedia into two classes, class 0 (poor quality) and class 1 (good quality), using the Adaptive Neuro Fuzzy Inference System (ANFIS) and data mining techniques. Two ANFIS models are built using the Fuzzy Logic Toolbox [1] available in Matlab. The first ANFIS is based on the rules obtained from the J48 classifier in WEKA, while the other was built using expert knowledge. The data used for this research work contain records for 226 articles taken from the German version of Wikipedia. The dataset consists of 19 inputs and one output. The data were preprocessed to remove any similar attributes. The input variables relate to the editors, contributors, length of articles and the lifecycle of articles. Finally, an analysis of the different methods implemented in this research is carried out to compare the performance of each classification method used.
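As a rough analogue of the rule-extraction step described above, the sketch below trains a CART decision tree (standing in for WEKA's J48) on hypothetical article features and prints the resulting rules, which are the kind of input that could be translated into ANFIS membership rules; feature names, data and labels are purely illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical feature matrix: per-article counts (editors, contributors,
# article length, age in days). Names and values are illustrative only.
rng = np.random.default_rng(1)
X = rng.integers(1, 500, size=(226, 4))
y = (X[:, 2] > 250).astype(int)          # toy label: class 1 = "good quality"

# A shallow CART tree plays the role of J48 here; its printed rules could
# seed the rule base of a fuzzy inference system.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["editors", "contributors", "length", "age_days"]))
```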