918 results for Trial and error


Relevance:

100.00%

Abstract:

One of the most common development processes for carrying out an architectural project is trial and error: a process of selecting among tests that is usually approached in two ways, either to progressively refine a more optimal solution, or to open up new lines of investigation. To examine this in depth, the article analyses two different design processes for houses developed by trial and error, both reference works in the history of architecture: the Villa Stonborough by Wittgenstein and the Villa Moller by Adolf Loos. Although they belong to the same historical period, the two houses were developed in very different, almost opposed ways. The study aims to identify the concepts that drove their different modes of production, in order to extrapolate them to other similar cases.

Relevance:

100.00%

Abstract:

Vicarious trial-and-error (VTE) is the term that Muenzinger and Tolman used to describe the rat's conflict-like behavior at a choice point before responding. Recently, VTE was proposed as an alternative mechanism to the concept of the "cognitive map" in accounts of hippocampal function. That is, many phenomena of impaired learning and memory related to hippocampal interventions may be explained by behavioral first principles: reduced conflicting, incipient, pre-choice tendencies to approach and avoid. The nonspatial black-white discrimination learning and VTE behavior of the rat were investigated. Hippocampal-lesioned and sham-lesioned animals were trained for 25 days (20 trials per day) starting at 60 days of age. Each movement of the head from one discriminative stimulus to the other was counted as a VTE instance. Lesioned rats showed fewer VTEs than sham controls, and they learned much more slowly or not at all. After learning, VTE frequency declined. Male and female rats showed no significant differences in VTE behavior or discrimination learning.

Relevance:

100.00%

Abstract:

In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization-based approach. Therefore, it is no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, reducing cost. Firstly, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Secondly, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship is found between the measured image quality at a given stage of the coding process and a quantization matrix. Thus, the definition-script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts may be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image-quality improvement during decoding.
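To make the trial-and-error baseline mentioned above concrete, the sketch below is an illustration only, not the authors' method: it rescales the standard JPEG luminance quantization table by an IJG-style quality factor and bisects on that factor until a target PSNR is reached. The function names, the simplified block round-trip (no zig-zag ordering, entropy coding or chroma handling) and the bisection tolerance are assumptions made for this example.

# Minimal sketch of the classical trial-and-error baseline (not the paper's method).
import numpy as np
from scipy.fftpack import dct, idct

# Standard JPEG luminance quantization table (Annex K of the JPEG standard).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=np.float64)

def scaled_matrix(quality):
    # IJG-style scaling of the default table by a quality factor in (0, 100].
    scale = 5000.0 / quality if quality < 50 else 200.0 - 2.0 * quality
    return np.clip(np.floor((Q50 * scale + 50.0) / 100.0), 1, 255)

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def roundtrip(image, qmatrix):
    # Quantize and dequantize every 8x8 block of a greyscale image.
    h, w = image.shape[0] - image.shape[0] % 8, image.shape[1] - image.shape[1] % 8
    img = image[:h, :w].astype(np.float64) - 128.0
    out = np.empty_like(img)
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            coeffs = dct2(img[i:i + 8, j:j + 8])
            out[i:i + 8, j:j + 8] = idct2(np.round(coeffs / qmatrix) * qmatrix)
    return np.clip(out + 128.0, 0.0, 255.0)

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak * peak / mse)

def quality_for_target_psnr(image, target_psnr, tol=0.1):
    # The repeated-encoding search that a direct MSE/PSNR-to-matrix model avoids.
    h, w = image.shape[0] - image.shape[0] % 8, image.shape[1] - image.shape[1] % 8
    ref = image[:h, :w]
    lo, hi = 1.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if psnr(ref, roundtrip(ref, scaled_matrix(mid))) < target_psnr:
            lo = mid
        else:
            hi = mid
    return hi, scaled_matrix(hi)

In the approach described in the abstract, the Laplacian model of the DCT coefficients relates quantization steps to the target MSE directly, so a search loop of this kind is not needed.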

Relevance:

100.00%

Abstract:

Background: Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk from hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. Methods: In this pragmatic, cluster randomised trial, general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to general practices, patients, pharmacists, researchers, and statisticians. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of angiotensin converting enzyme (ACE) inhibitors or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. Findings: 72 general practices with a combined list size of 480 942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0·58, 95% CI 0·38–0·89); a β blocker if they had asthma (0·73, 0·58–0·91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0·51, 0·34–0·78). PINCER has a 95% probability of being cost effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. Interpretation: The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Funding: Patient Safety Research Portfolio, Department of Health, England.

Relevance:

100.00%

Abstract:

When combined at particular molar fractions, sugars, amino acids or organic acids present a strong melting-point depression, becoming liquids at room temperature. These are called Natural Deep Eutectic Solvents (NADES) and are envisaged to play a major role in the chemical engineering processes of the future. Nonetheless, there is a significant lack of knowledge of their fundamental and basic properties, which is hindering their industrial application. For this reason, it is important to extend the knowledge of these systems, boosting their application development [1]. In this work, we have developed and characterized NADES based on choline chloride, organic acids, amino acids and sugars. Their density, thermal behavior, conductivity and polarity were assessed for different compositions. The conductivity was measured from 0 to 40 °C and the temperature effect was well described by the Vogel-Fulcher-Tammann equation. The morphological characterization of the crystallizable materials was done by polarized optical microscopy, which also provided evidence of homogeneity/phase separation. Additionally, the rheological and thermodynamic properties of the NADES and the effect of water content were also studied. The results show that these systems have Newtonian behavior and exhibit a significant viscosity decrease with temperature and water content, due to increased molecular mobility. The anhydrous systems present viscosities ranging from above 1000 Pa·s at 20 °C to below 1 Pa·s at 70 °C. DSC characterization confirms that, for water contents as high as a 1:1:1 molar ratio, the mixture retains its single-phase behavior. The results obtained demonstrate that NADES properties can be finely tuned by careful selection of their constituents. NADES present the necessary properties for use as extraction solvents: they can be prepared from inexpensive raw materials and tailored for the selective extraction of target molecules. The data produced in this work are therefore important for the selection of the most promising candidates, avoiding a time-consuming and expensive trial-and-error phase, and also provide data for the development of models able to predict their properties and the mechanisms that allow the formation of deep eutectic mixtures.
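As an illustration of the Vogel-Fulcher-Tammann description mentioned above, the sketch below fits the VFT form to conductivity-temperature data. Only the functional form comes from the abstract; the data values, units and starting parameters are invented for demonstration.

# Hedged sketch: fit the Vogel-Fulcher-Tammann (VFT) form
# sigma(T) = sigma0 * exp(-B / (T - T0)) to conductivity-vs-temperature data.
import numpy as np
from scipy.optimize import curve_fit

def vft(T, sigma0, B, T0):
    # T in kelvin; sigma0 (conductivity units), B (K) and T0 (K) are fit parameters.
    return sigma0 * np.exp(-B / (T - T0))

# Synthetic "measurements" between 0 and 40 degC, generated from the model plus noise.
rng = np.random.default_rng(1)
T_K = np.linspace(273.15, 313.15, 9)
sigma = vft(T_K, 50.0, 800.0, 190.0) * (1.0 + 0.02 * rng.normal(size=T_K.size))

# Bounds keep T0 below the measured range so the exponent stays finite during fitting.
params, _ = curve_fit(vft, T_K, sigma, p0=(10.0, 500.0, 150.0),
                      bounds=([0.0, 0.0, 0.0], [np.inf, np.inf, 260.0]))
sigma0_fit, B_fit, T0_fit = params
print(f"sigma0 = {sigma0_fit:.2f}, B = {B_fit:.0f} K, T0 = {T0_fit:.0f} K")

Fits of this kind yield, for each composition, the B and T0 parameters that summarize how strongly the conductivity depends on temperature.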

Relevance:

100.00%

Abstract:

Otto-von-Guericke-Universität Magdeburg, Fakultät für Mathematik, Univ., Dissertation, 2015

Relevance:

100.00%

Abstract:

The capacity to learn to associate sensory perceptions with appropriate motor actions underlies the success of many animal species, from insects to humans. The evolutionary significance of learning has long been a subject of interest for evolutionary biologists, who emphasize the benefit yielded by learning under changing environmental conditions, where it is required to flexibly switch from one behavior to another. However, two unsolved questions are particularly important for improving our knowledge of the evolutionary advantages provided by learning, and are addressed in the present work. First, because it is possible to learn the wrong behavior when a task is too complex, the learning rules and their underlying psychological characteristics that generate truly adaptive behavior must be identified with greater precision, and must be linked to the specific ecological problems faced by each species. A framework for predicting behavior from the definition of a learning rule is developed here. Learning rules capture cognitive features such as the tendency to explore, or the ability to infer rewards associated with unchosen actions. It is shown that these features interact in a non-intuitive way to generate adaptive behavior in social interactions where individuals affect each other's fitness. Such behavioral predictions are used in an evolutionary model to demonstrate that, surprisingly, simple trial-and-error learning is not always outcompeted by more computationally demanding inference-based learning when population members interact in pairwise social interactions. A second question in the evolution of learning is its link with, and relative advantage compared to, other simpler forms of phenotypic plasticity. After providing a conceptual clarification of the distinction between genetically determined and learned responses to environmental stimuli, a new factor in the evolution of learning is proposed: environmental complexity. A simple mathematical model shows that a measure of environmental complexity, the number of possible stimuli in one's environment, is critical for the evolution of learning. In conclusion, this work opens roads for modeling interactions between evolving species and their environment in order to predict how natural selection shapes animals' cognitive abilities.

Relevance:

100.00%

Abstract:

Technology (i.e. tools, methods of cultivation and domestication, systems of construction and appropriation, machines) has increased the vital rates of humans, and is one of the defining features of the transition from Malthusian ecological stagnation to potentially perpetual population growth. Maladaptations, on the other hand, encompass behaviours, customs and practices that decrease the vital rates of individuals. Technology and maladaptations are part of the total stock of culture carried by the individuals in a population. Here, we develop a quantitative model for the coevolution of cumulative adaptive technology and maladaptive culture in a 'producer-scrounger' game, which can also usefully be interpreted as an 'individual-social' learner interaction. Producers (individual learners) are assumed to invent new adaptations and maladaptations by trial-and-error learning, insight or deduction, and they pay the cost of innovation. Scroungers (social learners) are assumed to copy or imitate (cultural transmission) both the adaptations and the maladaptations generated by producers. We show that the coevolutionary dynamics of producers and scroungers in the presence of cultural transmission can have a variety of effects on population carrying capacity: from stable polymorphism, where scroungers bring an advantage to the population (an increase in carrying capacity), to periodic cycling, where scroungers decrease carrying capacity. We find that selection-driven cultural innovation and transmission may send a population on a path of indefinite growth or to extinction.

Relevance:

100.00%

Abstract:

In this report we present the growth process of the cobalt oxide system using reactive electron-beam deposition. In this technique, a target of metallic cobalt is evaporated and its atoms are oxidized in flight in an oxygen-rich reactive atmosphere before reaching the surface of the substrate. The deposition parameters were optimized by a trial-and-error procedure to obtain the correct stoichiometry and crystalline phase. The evaporation conditions needed to achieve the correct cobalt oxide rock-salt structure, when evaporating onto amorphous silicon nitride, are: a substrate temperature of 525 K, an oxygen partial pressure of 2.5·10⁻⁴ mbar and an evaporation rate of 1 Å/s. Once the parameters were optimized, a set of ultrathin films with nominal thicknesses ranging from 1 nm to 20 nm, as well as bulk samples, were grown. To characterize the samples and study their microstructure and morphology, X-ray diffraction, transmission electron microscopy, electron diffraction, energy-dispersive X-ray spectroscopy and quasi-adiabatic nanocalorimetry were used. The final results show a size-dependent effect on the antiferromagnetic transition: its Néel temperature is depressed as the size of the grains forming the layer decreases.

Relevance:

100.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal-discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
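To make the last point concrete, here is a minimal sketch of computing a maximal-discrepancy penalty by label flipping, as described above. The choice of scikit-learn's LogisticRegression as a stand-in for the model class, and the use of log-loss minimization to approximate 0-1-loss empirical risk minimization, are assumptions of this example, not part of the paper.

# Sketch of the maximal-discrepancy penalty: split the sample into two halves,
# flip the labels of the first half, run empirical risk minimization on the
# modified sample, and report the gap between the two half-sample errors.
# Labels are assumed to be 0/1.
import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in for the model class

def maximal_discrepancy(model_factory, X, y):
    n = len(y) // 2
    X1, y1, X2, y2 = X[:n], y[:n], X[n:2 * n], y[n:2 * n]
    # Maximizing err1(f) - err2(f) over f is equivalent to minimizing the error
    # on the combined sample in which the first half's labels are flipped.
    f = model_factory().fit(np.vstack([X1, X2]), np.concatenate([1 - y1, y2]))
    err1 = np.mean(f.predict(X1) != y1)
    err2 = np.mean(f.predict(X2) != y2)
    return err1 - err2

# Usage on synthetic data (hypothetical example).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
print("maximal-discrepancy penalty:", maximal_discrepancy(LogisticRegression, X, y))

The expected maximal discrepancy mentioned in the abstract would average such quantities over random splits, which is where the Monte Carlo integration comes in.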

Relevance:

100.00%

Abstract:

Individual learning (e.g., trial-and-error) and social learning (e.g., imitation) are alternative ways of acquiring and expressing the appropriate phenotype in an environment. The optimal balance between individual and social learning may be dictated by the life stage or age of an organism. Of special interest is a learning schedule in which social learning precedes individual learning, because such a schedule is apparently a necessary condition for cumulative culture. Assuming two obligatory learning stages per discrete generation, we obtain the evolutionarily stable learning schedules for the three situations where the environment is constant, fluctuates between generations, or fluctuates within generations. During each learning stage, we assume that an organism may target the optimal phenotype in the current environment by individual learning, and/or the mature phenotype of the previous generation by oblique social learning. In the absence of exogenous costs of learning, the evolutionarily stable learning schedules are predicted to be either pure social learning followed by pure individual learning ("bang-bang" control) or pure individual learning at both stages ("flat" control). Moreover, we find for each situation that the evolutionarily stable learning schedule is also the one that optimizes the learned phenotype at equilibrium.

Relevance:

100.00%

Abstract:

This study compared the outcome of total knee replacement (TKR) in adult patients with fixed- and mobile-bearing prostheses during the first post-operative year and at five years' follow-up, using gait parameters as a new objective measure. This double-blind randomised controlled clinical trial included 55 patients with mobile-bearing (n = 26) and fixed-bearing (n = 29) prostheses of the same design, evaluated pre-operatively and post-operatively at six weeks, three months, six months, one year and five years. Each participant undertook two 30 m walking trials and completed the EuroQol questionnaire, the Western Ontario and McMaster Universities osteoarthritis index, the Knee Society score, and visual analogue scales for pain and stiffness. Gait analysis was performed using five miniature angular rate sensors mounted on the trunk (sacrum) and on each thigh and calf. The study population was divided into two groups according to age (≤ 70 years versus > 70 years). Improvements in most gait parameters at five years' follow-up were greater for fixed-bearing TKRs in older patients (> 70 years) and greater for mobile-bearing TKRs in younger patients (≤ 70 years). These findings should be confirmed by an extended, age-controlled study, as the ideal choice of prosthesis might depend on the age of the patient at the time of surgery.