932 results for Predictive controllers


Relevance:

20.00%

Publisher:

Abstract:

Government initiatives in several developed and developing countries to roll out smart meters call for research on the sustainability impacts of these devices. In principle, smart meters bring about tighter control over energy theft and lower consumption, but they require a high level of engagement from end-users. An alternative is the load controller, which regulates the load according to pre-set parameters. To date, research has focused on the impacts of these two alternatives separately. This study compares the sustainability impacts of smart meters and load controllers in an occupied office building in Italy. The assessment is carried out on three different floors of the same building. Findings show that the demand reductions associated with the smart meter device are 5.2% higher than those associated with the load controller.

Relevance:

20.00%

Publisher:

Abstract:

There is strong evidence that neonates imitate previously unseen behaviors. These behaviors are predominantly used in social interactions, demonstrating neonates’ ability and motivation to engage with others. Research on neonatal imitation can provide a wealth of information about the early mirror neuron system (MNS): namely, its functional characteristics, its plasticity from birth, and its relation to skills later in development. Though numerous studies document the existence of neonatal imitation in the laboratory, little is known about its natural occurrence during parent-infant interactions and its plasticity as a consequence of experience. We review these critical aspects of imitation, which we argue are necessary for understanding the early action-perception system. We address common criticisms and misunderstandings about neonatal imitation and discuss methodological differences among studies. Recent work reveals that individual differences in neonatal imitation positively correlate with later social, cognitive, and motor development. We propose that such variation in neonatal imitation could reflect important individual differences of the MNS. Although postnatal experience is not necessary for imitation, we present evidence that neonatal imitation is influenced by experience in the first week of life.

Relevance:

20.00%

Publisher:

Abstract:

When the sensory consequences of an action are systematically altered, our brain can recalibrate the mappings between sensory cues and properties of our environment. This recalibration can be driven by both cue conflicts and altered sensory statistics, but neither mechanism offers a way for cues to be calibrated so they provide accurate information about the world, as sensory cues carry no information as to their own accuracy. Here, we explored whether sensory predictions based on internal physical models could be used to accurately calibrate visual cues to 3D surface slant. Human observers played a 3D kinematic game in which they adjusted the slant of a surface so that a moving ball would bounce off the surface and through a target hoop. In one group, the ball’s bounce was manipulated so that the surface behaved as if it had a different slant from the one signaled by visual cues. With experience of this altered bounce, observers recalibrated their perception of slant so that it was more consistent with the assumed laws of kinematics and the physical behavior of the surface. In another group, making the ball spin in a way that could physically explain its altered bounce eliminated this pattern of recalibration. Importantly, both groups adjusted their behavior in the kinematic game in the same way, experienced the same set of slants, and were not presented with low-level cue conflicts that could drive the recalibration. We conclude that observers use predictive kinematic models to accurately calibrate visual cues to 3D properties of the world.
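
The kinematic prediction at the heart of the task can be made concrete with a toy sketch: reflecting a ball's 2D velocity off a planar surface whose slant sets the surface normal. This is only an illustration under simplifying assumptions (a lossless, spin-free mirror bounce), not the experiment's actual simulation.

```python
import numpy as np

def bounce(velocity: np.ndarray, slant_deg: float) -> np.ndarray:
    """Reflect an incoming 2D velocity off a surface tilted by slant_deg.
    Assumes an ideal mirror bounce: no spin, no energy loss."""
    theta = np.radians(slant_deg)
    normal = np.array([-np.sin(theta), np.cos(theta)])  # unit surface normal
    return velocity - 2.0 * np.dot(velocity, normal) * normal

incoming = np.array([1.0, -2.0])
print(bounce(incoming, 0.0))   # flat surface: [1. 2.]
print(bounce(incoming, 10.0))  # a slanted surface deflects the rebound
```

A mismatch between the slant implied by the observed rebound and the slant signaled visually is exactly the kind of discrepancy such a predictive model could expose.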

Relevance:

20.00%

Publisher:

Abstract:

In the UK, architectural design is regulated through a system of design control in the public interest, which aims to secure and promote ‘quality’ in the built environment. Design control is primarily implemented by locally employed planning professionals with political oversight, and by independent design review panels staffed predominantly by design professionals. Design control has a lengthy and complex history, with the concept of ‘design’ offering a range of challenges for a regulatory system of governance. A simultaneously creative and emotive discipline, architectural design is a difficult issue to regulate objectively or consistently, often leading to policy that is regarded as highly discretionary and flexible. This makes regulatory outcomes difficult to predict, as the approaches taken by the ‘agents of control’ can vary according to the individual. The role of the design controller is therefore central: the controller is tasked with interpreting design policy and guidance, appraising design quality and passing professional judgment. However, little is known about what influences the way design controllers approach their task, leaving a ‘veil’ over design control that shrouds the basis of their decisions. This research engaged directly with the attitudes and perceptions of design controllers in the UK, lifting this ‘veil’. Using in-depth interviews and Q-Methodology, the thesis explores this hidden element of control, revealing a number of key differences in how controllers approach and implement policy and guidance, conceptualise design quality, and rationalise their evaluations and judgments. The research develops a conceptual framework for agency in design control, consisting of six variables (Regulation; Discretion; Skills; Design Quality; Aesthetics; and Evaluation), and suggests that this could act as a ‘heuristic’ instrument for UK controllers, prompting more reflexivity in evaluating their own position, approaches, and attitudes, leading to better practice and increased transparency of control decisions.

Relevance:

20.00%

Publisher:

Abstract:

Our digital universe is rapidly expanding: more and more daily activities are digitally recorded, data arrive in streams, and they need to be analyzed in real time and may evolve over time. In the last decade many adaptive learning algorithms and prediction systems, which can automatically update themselves with new incoming data, have been developed. The majority of those algorithms focus on improving predictive performance and assume that a model update is always desired, as soon and as frequently as possible. In this study we consider a potential model update as an investment decision, which, as in the financial markets, should be taken only if a certain return on investment is expected. We introduce and motivate a new research problem for data streams: cost-sensitive adaptation. We propose a reference framework for analyzing adaptation strategies in terms of costs and benefits. Our framework allows us to characterize and decompose the costs of model updates, and to assess and interpret the gains in performance due to model adaptation for a given learning algorithm on a given prediction task. Our proof-of-concept experiment demonstrates how the framework can aid in analyzing and managing adaptation decisions in the chemical industry.
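
As a rough sketch of the cost-benefit framing, the snippet below treats a model update as an investment decision; every function name and figure in it is an invented placeholder, not part of the paper's actual framework.

```python
def should_update(expected_error_reduction: float,
                  errors_per_period: float,
                  cost_per_error: float,
                  update_cost: float,
                  horizon_periods: int) -> bool:
    """Update the model only if the expected benefit over the planning
    horizon exceeds the cost of retraining and redeployment."""
    expected_benefit = (expected_error_reduction * errors_per_period
                        * cost_per_error * horizon_periods)
    return expected_benefit > update_cost

# Example: a 5% error reduction on 200 predictions/day, $3 per error,
# over a 30-day horizon, against a $400 retraining cost.
print(should_update(0.05, 200, 3.0, 400.0, 30))  # True: benefit $900 > $400
```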

Relevance:

20.00%

Publisher:

Abstract:

This paper employs a probit model and a Markov switching model, using information from the Conference Board Leading Indicator and other predictor variables, to forecast the signs of future rental growth in four key U.S. commercial rent series. We find that both approaches have considerable power to predict changes in the direction of commercial rents up to two years ahead, exhibiting strong improvements over a naïve model, especially for the warehouse and apartment sectors. We find that while the Markov switching model appears to be more successful, it lags behind actual turnarounds in market outcomes, whereas the probit model is able to detect whether rental growth will be positive or negative several quarters ahead.
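
A minimal sketch of the probit side of this approach, fitted with statsmodels on synthetic data; the predictor names are placeholders for leading-indicator variables, not the paper's actual specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
leading_indicator = rng.normal(size=n)   # e.g. Conference Board LI growth
lagged_rent_growth = rng.normal(size=n)
# Synthetic target: 1 if future rental growth is positive, else 0
latent = 0.8 * leading_indicator + 0.4 * lagged_rent_growth + rng.normal(size=n)
y = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([leading_indicator, lagged_rent_growth]))
probit_res = sm.Probit(y, X).fit(disp=False)
# Predicted probability that rental growth turns positive
print(probit_res.predict(X)[:5])
```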

Relevance:

20.00%

Publisher:

Abstract:

Species distribution models (SDMs) are increasingly used to understand the factors that regulate variation in biodiversity patterns and to help plan conservation strategies. However, these models are rarely validated with independently collected data, and it is unclear whether SDM performance is maintained across distinct habitats and for species with different functional traits. Highly mobile species, such as bees, can be particularly challenging to model. Here, we use independent sets of occurrence data collected systematically in several agricultural habitats to test how the predictive performance of SDMs for wild bee species depends on species traits, habitat type, and sampling technique. We used a species distribution modeling approach parametrized for the Netherlands, with presence records from 1990 to 2010 for 193 Dutch wild bee species. For each species, we built a Maxent model based on 13 climate and landscape variables. We tested the predictive performance of the SDMs with independent datasets collected from orchards and arable fields across the Netherlands from 2010 to 2013, using transect surveys or pan traps. Model predictive performance depended on species traits and habitat type. The occurrence of bee species specialized in habitat and diet was better predicted than that of generalist bees. Predictions of habitat suitability were also more precise for habitats that are temporally more stable (orchards) than for habitats that undergo regular alteration (arable fields), particularly for small, solitary bees. As a conservation tool, SDMs are better suited to modeling rarer, specialist species than more generalist ones, and will work best in long-term stable habitats. The variability of complex, short-term habitats is difficult to capture in such models, and historical land-use data generally have low thematic resolution. To improve the usefulness of SDMs, models require explanatory variables and collection data that include detailed landscape characteristics, for example, the variability of crops and flower availability. Additionally, testing SDMs with field surveys should involve multiple collection techniques.
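
The independent-validation step can be sketched as follows. Maxent itself is not reproduced here, so a logistic model stands in for the fitted SDM, and all data are synthetic; only the evaluation logic (scoring habitat-suitability predictions against independently collected records) is the point.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 13))          # 13 climate/landscape variables
y_train = (X_train[:, 0] + rng.normal(size=500) > 0).astype(int)

sdm = LogisticRegression().fit(X_train, y_train)  # stand-in for Maxent

# Independent field survey (e.g. transects or pan traps in orchards)
X_field = rng.normal(size=(100, 13))
y_field = (X_field[:, 0] + rng.normal(size=100) > 0).astype(int)
suitability = sdm.predict_proba(X_field)[:, 1]
print("AUC on independent data:", roc_auc_score(y_field, suitability))
```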

Relevance:

20.00%

Publisher:

Abstract:

Regional information on climate change is urgently needed but often deemed unreliable. To achieve credible regional climate projections, it is essential to understand underlying physical processes, reduce model biases and evaluate their impact on projections, and adequately account for internal variability. In the tropics, where atmospheric internal variability is small compared with the forced change, advancing our understanding of the coupling between long-term changes in upper-ocean temperature and the atmospheric circulation will help most to narrow the uncertainty. In the extratropics, relatively large internal variability introduces substantial uncertainty, while exacerbating risks associated with extreme events. Large ensemble simulations are essential to estimate the probabilistic distribution of climate change on regional scales. Regional models inherit atmospheric circulation uncertainty from global models and do not automatically solve the problem of regional climate change. We conclude that the current priority is to understand and reduce uncertainties on scales greater than 100 km to aid assessments at finer scales.
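
A minimal numpy sketch of the large-ensemble idea: many realizations that differ only in internal variability turn the regional change into a probability distribution rather than a single number. All figures are illustrative, not from any model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_members = 100                 # members differing only in initial conditions
forced_change = 1.5             # assumed forced regional warming, degC
internal_variability = rng.normal(0.0, 0.8, size=n_members)
regional_change = forced_change + internal_variability

lo, med, hi = np.percentile(regional_change, [5, 50, 95])
print(f"5-95% range: {lo:.2f} to {hi:.2f} degC (median {med:.2f})")
```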

Relevance:

20.00%

Publisher:

Abstract:

Objectives: To evaluate risk factors for recurrence of carcinoma of the uterine cervix among women who had undergone radical hysterectomy without pelvic lymph node metastasis, taking into consideration not only the classical histopathological factors but also sociodemographic, clinical and treatment-related factors. Study design: This was an exploratory analysis of 233 women with carcinoma of the uterine cervix (stages IB and IIA) who were treated by means of radical hysterectomy and pelvic lymphadenectomy, with free surgical margins and without lymph node metastases on conventional histopathological examination. Women with histologically normal lymph nodes but with micrometastases on immunohistochemical analysis (AE1/AE3) were excluded. Disease-free survival for sociodemographic, clinical and histopathological variables was calculated using the Kaplan-Meier method. The Cox proportional hazards model was used to identify independent risk factors for recurrence. Results: Twenty-seven recurrences were recorded (11.6%), of which 18 were pelvic, four were distant, four were pelvic + distant and one was of unknown location. The five-year disease-free survival rate among the study population was 88.4%. The independent risk factors for recurrence in the multivariate analysis were: postmenopausal status (HR 14.1; 95% CI: 3.7-53.6; P < 0.001), absence of or slight inflammatory reaction (HR 7.9; 95% CI: 1.7-36.5; P = 0.008) and invasion of the deepest third of the cervix (HR 6.1; 95% CI: 1.3-29.1; P = 0.021). Postoperative radiotherapy was identified as a protective factor against recurrence (HR 0.02; 95% CI: 0.001-0.25; P = 0.003). Conclusion: Postmenopausal status is a possible independent risk factor for recurrence even when adjusted for classical prognostic factors (such as tumour size, depth of tumour invasion and capillary embolisation) and treatment-related factors (period of treatment and postoperative radiotherapy status). (C) 2009 Elsevier Ireland Ltd. All rights reserved.
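
The survival analysis described (Kaplan-Meier estimation followed by a Cox proportional hazards model) can be sketched generically with the lifelines library; the data below are synthetic, and the covariate names merely mirror the reported risk factors.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(3)
n = 233
df = pd.DataFrame({
    "postmenopausal": rng.integers(0, 2, n),
    "deep_third_invasion": rng.integers(0, 2, n),
    "postop_radiotherapy": rng.integers(0, 2, n),
    "months_followup": rng.exponential(60.0, n),
    "recurrence": rng.integers(0, 2, n),
})

# Disease-free survival curve (Kaplan-Meier)
km = KaplanMeierFitter().fit(df["months_followup"], df["recurrence"])
print(km.median_survival_time_)

# Independent risk factors via Cox proportional hazards
cph = CoxPHFitter()
cph.fit(df, duration_col="months_followup", event_col="recurrence")
cph.print_summary()  # hazard ratios with 95% CIs, as reported in the abstract
```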

Relevance:

20.00%

Publisher:

Abstract:

In 2004 the National Household Survey (Pesquisa Nacional por Amostra de Domicílios - PNAD) estimated the prevalence of food and nutrition insecurity in Brazil. However, PNAD data cannot be disaggregated at the municipal level. The objective of this study was to build a statistical model to predict severe food insecurity in Brazilian municipalities based on the PNAD dataset. Exclusion criteria were: incomplete food security data (19.30%); informants younger than 18 years old (0.07%); collective households (0.05%); and households headed by indigenous persons (0.19%). The modeling was carried out in three stages, beginning with the selection of variables related to food insecurity using univariate logistic regression. The variables chosen to construct the municipal estimates were selected from those included in both the PNAD and the 2000 Census. Multivariate logistic regression was then carried out, removing non-significant variables, with odds ratios adjusted by multiple logistic regression. The Wald test was applied to check the significance of the coefficients in the logistic equation. The final model included the variables: per capita income; years of schooling; race and gender of the household head; urban or rural residence; access to public water supply; presence of children; total number of household inhabitants; and state of residence. The adequacy of the model was tested using the Hosmer-Lemeshow test (p=0.561) and the ROC curve (area=0.823). These tests indicated that the model has strong predictive power and can be used to determine household food insecurity in Brazilian municipalities, suggesting that similar predictive models may be useful tools in other Latin American countries.
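
The model-checking steps named here can be sketched in Python as follows; the variables and data are synthetic stand-ins for the PNAD/Census ones, and the Hosmer-Lemeshow test is hand-rolled since scikit-learn does not provide it.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 5))   # e.g. income, schooling, household size...
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]
print("ROC AUC:", roc_auc_score(y, p))

# Hosmer-Lemeshow: compare observed vs expected events in g probability deciles
g = 10
order = np.argsort(p)
groups = np.array_split(order, g)
H = sum((y[idx].sum() - p[idx].sum()) ** 2
        / (p[idx].sum() * (1 - p[idx].mean()))
        for idx in groups)
print("Hosmer-Lemeshow p-value:", chi2.sf(H, g - 2))
```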

Relevance:

20.00%

Publisher:

Abstract:

Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. As predictive performance evaluation is a multidimensional problem, single scalar summaries such as error rate, although quite convenient due to their simplicity, can seldom evaluate all the aspects that a complete and reliable evaluation must consider. Because of this, various graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these methods lies in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing those aspects to an arbitrarily chosen (and often biased) single scalar measure. Furthermore, to select a suitable graphical method for a given task, it is crucial to identify its strengths and weaknesses. This paper surveys various graphical methods often used for predictive performance evaluation. By presenting these methods within the same framework, we hope this paper may shed some light on deciding which methods are more suitable to use in different situations.
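
As a small example of one graphical method such surveys cover, the sketch below traces ROC and precision-recall operating points across thresholds instead of collapsing performance into a single scalar; the data are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve

rng = np.random.default_rng(5)
scores = rng.random(100)                                 # classifier scores
labels = (scores + rng.normal(0, 0.3, 100) > 0.5).astype(int)

fpr, tpr, thresholds = roc_curve(labels, scores)
precision, recall, _ = precision_recall_curve(labels, scores)
# Each (fpr, tpr) pair is one operating point; plotting them depicts the
# full trade-off that a scalar such as error rate would hide.
print(list(zip(fpr[:3], tpr[:3])))
```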

Relevance:

20.00%

Publisher:

Abstract:

The study of pharmacokinetic (PK) properties is of great importance in drug discovery and development. In the present work, PK/DB (a new freely available database for PK) was designed with the aim of creating robust databases for pharmacokinetic studies and for in silico absorption, distribution, metabolism and excretion (ADME) prediction. Comprehensive, web-based and easy to access, PK/DB manages 1203 compounds representing 2973 pharmacokinetic measurements, and includes five models for in silico ADME prediction (human intestinal absorption, human oral bioavailability, plasma protein binding, blood-brain barrier penetration and water solubility).

Relevance:

20.00%

Publisher:

Abstract:

Canalizing genes possess such broad regulatory power, and their action sweeps across such a wide swath of processes, that the full set of affected genes is not highly correlated under normal conditions. When not active, the controlling gene will not be predictable to any significant degree by its subject genes, either alone or in groups, since their behavior will be highly varied relative to the inactive controlling gene. When the controlling gene is active, its behavior is not well predicted by any one of its targets, but can be very well predicted by groups of genes under its control. To investigate this question, we introduce in this paper the concept of intrinsically multivariate predictive (IMP) genes, and present a mathematical study of IMP in the context of binary genes with respect to the coefficient of determination (CoD), which measures the predictive power of a set of genes with respect to a target gene. A set of predictor genes is said to be IMP for a target gene if all properly contained subsets of the predictor set are bad predictors of the target but the full predictor set predicts the target with great accuracy. We show that the logic of prediction, predictive power, covariance between predictors, and the entropy of the joint probability distribution of the predictors jointly affect the appearance of IMP genes. In particular, we show that high predictive power, small covariance among predictors, a large entropy of the joint probability distribution of the predictors, and certain logics, such as XOR in the 2-predictor case, are factors that favor the appearance of IMP. The IMP concept is applied to characterize the behavior of the gene DUSP1, which exhibits control over a central, process-integrating signaling pathway, thereby providing preliminary evidence that IMP can be used as a criterion for the discovery of canalizing genes.
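
A minimal numerical illustration of the IMP idea with the XOR logic mentioned above: each predictor alone has a CoD near 0 for the target, while the pair predicts it perfectly (CoD = 1). The CoD implementation follows the standard definition and is not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(6)
x1 = rng.integers(0, 2, 10000)
x2 = rng.integers(0, 2, 10000)
y = x1 ^ x2                              # target gene driven by XOR logic

def cod(target, *predictors):
    """CoD = (eps0 - eps) / eps0, where eps0 is the error of the best
    constant predictor and eps that of the optimal predictor on the inputs."""
    eps0 = min(np.mean(target), 1 - np.mean(target))
    keys = np.stack(predictors).T
    eps = 0.0
    for pattern in np.unique(keys, axis=0):
        mask = (keys == pattern).all(axis=1)
        p1 = target[mask].mean()
        eps += mask.mean() * min(p1, 1 - p1)
    return (eps0 - eps) / eps0

print(cod(y, x1), cod(y, x2))   # each close to 0
print(cod(y, x1, x2))           # 1.0: the pair is intrinsically multivariate
```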

Relevance:

20.00%

Publisher:

Abstract:

Data mining can be used in the healthcare industry to “mine” clinical data and discover hidden information for intelligent and effective decision making. The discovery of hidden patterns and relationships often goes untapped, yet advanced data mining techniques can serve as a remedy. This thesis deals mainly with Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests, and external symptoms used to predict chronic renal disease. Data from the database are first imported into Weka (3.6), and the chi-square method is used for feature selection. After normalizing the data, three classifiers were applied and the quality of their output evaluated: Decision Tree, Naïve Bayes, and the K-Nearest Neighbour (KNN) algorithm. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. The efficiency of the Decision Tree and KNN classifiers was almost the same, but Naïve Bayes showed a comparative edge over the others. Sensitivity and specificity are further used as statistical measures to examine the performance of the binary classification. Sensitivity (also called recall rate in some fields) measures the proportion of actual positives that are correctly identified, while specificity measures the proportion of negatives that are correctly identified. The CRISP-DM methodology is applied to build the mining models; it consists of six major phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.
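
As a short illustration of those two measures, the sketch below computes sensitivity and specificity from a scikit-learn confusion matrix on synthetic predictions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(7)
y_true = rng.integers(0, 2, 200)                  # 1 = chronic renal disease
y_pred = np.where(rng.random(200) < 0.85, y_true, 1 - y_true)  # noisy classifier

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # recall: actual positives correctly identified
specificity = tn / (tn + fp)   # actual negatives correctly identified
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```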