64 results for Computation by Abstract Devices
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Helping behavior is any intentional behavior that benefits another living being or group (Hogg & Vaughan, 2010). People tend to underestimate the probability that others will comply with their direct requests for help (Flynn & Lake, 2008). This implies that when people need help, they assess the probability of getting it (De Paulo, 1982, cited in Flynn & Lake, 2008), tend to estimate a probability that is lower than the real chance, and may therefore not even consider it worth asking. Existing explanations attribute this phenomenon to a mistaken cost computation by the help seeker, who emphasizes the instrumental cost of saying “yes” while ignoring that the potential helper must also take into account the social cost of saying “no”. In fact, especially in face-to-face interactions, the discomfort caused by refusing to help can be very high. In short, help seekers tend not to realize that refusing a help request may be more costly than accepting it. A similar effect has been observed when estimating the trustworthiness of other people: Fetchenhauer and Dunning (2010) showed that people tend to underestimate it as well. This bias is reduced when symmetric feedback (always provided) is given instead of asymmetric feedback (provided only when the person decides to trust the other). The same cause could apply to help seeking, since people receive feedback only when they actually make their request, but not otherwise. Fazio, Shook, and Eiser (2004) studied something that could be reinforcing these outcomes: learning asymmetries. By means of a computer game called BeanFest, they showed that people learn better about negatively valenced objects (beans, in this case) than about positively valenced ones. This learning asymmetry stemmed from “information gain being contingent on approach behavior” (p. 293), which can be identified with what Fetchenhauer and Dunning call ‘asymmetric feedback’, and hence also with help requests. Fazio et al. also found a generalization asymmetry in favor of negative attitudes over positive ones, which they attributed to a negativity bias that “weights resemblance to a known negative more heavily than resemblance to a positive” (p. 300). Applied to help-seeking scenarios, this means that when facing an unknown situation, people tend to generalize and infer that a negative outcome is more likely than a positive one; together with the above, this makes people more inclined to expect a “no” when requesting help. Denrell and Le Mens (2011) present a different perspective for explaining judgment biases in general. They deviate from the classical account based on inappropriate information processing (depicted, among others, by Fiske & Taylor, 2007, and Tversky & Kahneman, 1974) and explain such biases in terms of ‘adaptive sampling’. Adaptive sampling is a sampling mechanism in which the selection of sample items is conditioned by the previously observed values of the variable of interest (Thompson, 2011). Sampling adaptively allows individuals to protect themselves from experiences that once yielded negative outcomes. However, it also prevents them from giving those experiences a second chance and obtaining an updated outcome that might turn out positive, more positive, or simply closer to the mean, whatever direction that implies.
As Denrell and Le Mens (2011) explained, that makes sense: if you go to a restaurant and do not like the food, you do not choose that restaurant again. This is what we think could be happening when asking for help: when we get a “no”, we stop asking. Here, we want to provide a complementary explanation, based on adaptive sampling, for the underestimation of the probability that others will comply with our direct help requests. First, we develop and explain a model that represents the theory. We then test it empirically by means of experiments and elaborate on the analysis of the results.
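As a rough illustration of the adaptive-sampling mechanism described above (our sketch, not part of the original study), the following simulation assumes a recency-weighted belief update and a propensity to ask that follows the current belief. Because feedback arrives only when the agent asks, beliefs that drop after a “no” are rarely corrected, and the population's average belief settles below the true compliance rate.

```python
# Minimal simulation sketch (ours, not from the paper): asymmetric feedback
# plus adaptive sampling depresses the estimated compliance rate.
import random

def final_belief(true_p=0.7, alpha=0.3, periods=200, rng=None):
    rng = rng or random.Random()
    belief = 0.5                       # neutral starting belief
    for _ in range(periods):
        if rng.random() >= belief:     # low beliefs make asking unlikely ...
            continue                   # ... and then no feedback is received
        outcome = 1.0 if rng.random() < true_p else 0.0
        belief += alpha * (outcome - belief)   # recency-weighted update
    return belief

if __name__ == "__main__":
    rng = random.Random(42)
    beliefs = [final_belief(rng=rng) for _ in range(5_000)]
    print("true compliance rate: 0.70")
    print("mean final belief   :", round(sum(beliefs) / len(beliefs), 2))
```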
Abstract:
Currently, the response of most operational instruments and personal dosimeters used in radiation protection for neutron dosimetry is strongly dependent on the energy of the neutron spectra to be analysed, especially for neutron fields with a significant intermediate-energy component. Consequently, the interpretation of the readings of these devices is complicated unless the spectral distribution of the neutron fluence at the points of interest is known beforehand. In recent years the Grup de Física de les Radiacions of the Universitat Autònoma de Barcelona (GFR-UAB) has developed a neutron spectrometer based on a Bonner Sphere System (BSS) with a 3He proportional counter as the active detector. The main advantages of BSS neutron spectrometers are their isotropic response, the possibility of discriminating the neutron component from the gamma component in mixed fields, and their high neutron sensitivity at the dose levels analysed. With these characteristics, BSS neutron spectrometers meet the standards of the latest ICRP recommendations and can also be used in neutron dosimetry to measure doses over an energy range spanning nine orders of magnitude, from thermal energies up to 20 MeV. Within the framework of the collaboration between the GFR-UAB and the Laboratorio Nazionale di Frascati – Istituto Nazionale di Fisica Nucleare (LNF-INFN), a BSS spectrometry comparison exercise was carried out with the quasi-monoenergetic 2.5 MeV and 14 MeV beams of the ENEA Fast Neutron Generator. In this exercise, the neutron spectrum was determined at different distances from the accelerator target, using the FRUIT code recently developed by the LNF group. The results obtained show good agreement between the two spectrometers and between the measured and simulated data.
Abstract:
We characterize the set of Walrasian allocations of an economy as the set of allocations which can be supported by abstract equilibria that satisfy a recontracting condition which reflects the idea that agents can freely trade with each other. An alternative (and weaker) recontracting condition characterizes the core. The results are extended to production economies by extending the definition of the recontracting condition to include the possibility for agents to recontract with firms. However, no optimization requirement is imposed on firms. In pure exchange economies, an abstract equilibrium is a feasible allocation and a list of choice sets, one for each agent, that satisfy the following conditions: an agent's choice set is a subset of the commodity space that includes his endowment; and each agent's equilibrium bundle is a maximal element in his choice set, with respect to his preferences. The recontracting condition requires that any agent can buy bundles from any other agent's choice set by offering the other agent a bundle he prefers to his equilibrium bundle.
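The definition quoted above can be stated compactly. The notation below is ours, not the paper's, assuming a pure exchange economy with n agents and ℓ commodities:

```latex
% Notation (ours): agents i = 1,\dots,n with endowments \omega_i \in \mathbb{R}^{\ell}_{+}
% and preferences \succsim_i; an abstract equilibrium is a feasible allocation
% (x_1,\dots,x_n) together with choice sets (B_1,\dots,B_n) such that
\begin{align*}
  \omega_i \in B_i \subseteq \mathbb{R}^{\ell}_{+}
    &\quad\text{(the choice set contains the endowment)},\\
  x_i \in B_i \ \text{ and } \ x_i \succsim_i y \ \ \forall\, y \in B_i
    &\quad\text{(the equilibrium bundle is maximal in } B_i\text{)}.
\end{align*}
% Recontracting (informally): for any agents i \neq j and any z \in B_j,
% agent i can obtain z by offering agent j some bundle z' with z' \succ_j x_j.
```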
Abstract:
The interest in solar ultraviolet (UV) radiation from the scientific community and the general population has risen significantly in recent years because of the link between increased UV levels at the Earth's surface and depletion of ozone in the stratosphere. As a consequence of recent research, UV radiation climatologies have been developed, and the effects of some atmospheric constituents (such as ozone or aerosols) have been studied broadly. Correspondingly, there are well-established relationships between, for example, total ozone column and UV radiation levels at the Earth's surface. The effects of clouds, however, are not so well described, given the intrinsic difficulties in properly describing cloud characteristics. Nevertheless, the effect of clouds cannot be neglected, and the variability that clouds induce on UV radiation is particularly significant when short timescales are involved. In this review we show, summarize, and compare several works that deal with the effect of clouds on UV radiation. Specifically, the works reviewed here approach the issue from the empirical point of view: each work gives some relationship between measured UV radiation in cloudy conditions and cloud-related information. Basically, there are two groups of methods: techniques based on observations of cloudiness (either from human observers or by using devices such as sky cameras) and techniques that use measurements of broadband solar radiation as a surrogate for cloud observations. Some techniques combine both types of information. Comparison of results from different works is addressed by using the cloud modification factor (CMF), defined as the ratio between measured UV radiation in a cloudy sky and calculated radiation for a cloudless sky. Typical CMF values for overcast skies range from 0.3 to 0.7, depending on both cloud type and characteristics. Despite this large dispersion of values corresponding to the same cloud cover, it is clear that the cloud effect on UV radiation is 15–45% lower than the cloud effect on total solar radiation. The cloud effect is usually a reducing effect, but a significant number of works report an enhancement effect (that is, increased UV radiation levels at the surface) due to the presence of clouds. The review concludes with some recommendations for future studies aimed at further analyzing the cloud effects on UV radiation.
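Written out as a formula, the cloud modification factor defined above is (our rendering of the definition in the text):

```latex
\mathrm{CMF} \;=\; \frac{UV_{\text{measured, cloudy sky}}}{UV_{\text{calculated, cloudless sky}}},
\qquad \mathrm{CMF} < 1 \ \text{for attenuation}, \quad \mathrm{CMF} > 1 \ \text{for enhancement}.
```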
Abstract:
This project consists of developing a solution for NFC-enabled Android mobile devices, with the aim of motivating and increasing the number of citizens who practise sport in the city of Barcelona. The project includes the design of the mobile interface, the creation of a database, a REST API service to access the data from the devices, the use of the Google Maps service, and different libraries and tools to design the system architecture. The design and functionality constraints set by the client (the city council) are taken into account.
Abstract:
Background: In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments or observers) used to measure the same characteristic. We propose in this study a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices of normally-distributed measurements, and we describe its utility for evaluating inter- and intra-rater agreement if more than one reading per subject is available for each device.
Methods: We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices and, thereafter, to derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds of the coverage probability.
Results: The approach is illustrated with a real case example in which the agreement between two instruments, a handle mercury sphygmomanometer device and an OMRON 711 automatic device, is assessed in a sample of 384 subjects whose systolic blood pressure was measured twice with each device. A simulation study is implemented to evaluate and compare the accuracy of the approach against two already established methods, showing that the TI approximation produces accurate empirical confidence levels which are reasonably close to the nominal confidence level.
Conclusions: The method proposed is straightforward, since the TDI estimate is derived directly from a probability interval of a normally-distributed variable in its original scale, without further transformations. Thereafter, a natural way of making inferences about this estimate is to derive the appropriate TI. Constructions of TIs based on normal populations are implemented in most standard statistical packages, thus making it simpler for any practitioner to implement our proposal to assess agreement.
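The following is a minimal sketch of our simplified reading of the approach, not the authors' exact procedure. It assumes the paired differences are normal, takes the larger absolute bound of a central probability interval as a TDI-type point estimate, and uses Howe's approximate two-sided normal tolerance factor for an upper confidence bound.

```python
# TDI-style estimate from paired differences (our simplified sketch).
import numpy as np
from scipy.stats import norm, chi2

def tdi_sketch(d, p=0.90, alpha=0.05):
    d = np.asarray(d, dtype=float)
    n, m, s = d.size, d.mean(), d.std(ddof=1)
    z = norm.ppf((1 + p) / 2)
    # point estimate: larger absolute bound of the central p-probability interval
    tdi_hat = max(abs(m - z * s), abs(m + z * s))
    # Howe's approximate two-sided tolerance factor for a normal sample
    k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2.ppf(alpha, n - 1))
    tdi_upper = max(abs(m - k * s), abs(m + k * s))
    return tdi_hat, tdi_upper

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    diffs = rng.normal(loc=1.0, scale=5.0, size=384)   # simulated paired differences
    est, upper = tdi_sketch(diffs)
    print(f"TDI estimate ~ {est:.1f}, upper 95% bound ~ {upper:.1f}")
```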
Abstract:
We analyze the behavior of complex information in the Fresnel domain, taking into account the limited capability of liquid crystal devices to display complex values when they are used as holographic displays. To do this analysis, we study the reconstruction of Fresnel holograms at several distances using the different parts of the complex distribution. We also use the information adjusted with a method that combines two configurations of the devices in an adding architecture. The error analysis shows different behavior of the reconstructions for the different methods. Simulated and experimental results are presented.
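As a schematic illustration (our sketch, not the authors' code), the snippet below uses the standard FFT-based Fresnel transfer-function method to reconstruct a field at a distance z, comparing a full complex reconstruction with a phase-only one, which mimics a display that cannot render the full complex distribution. All parameters are hypothetical.

```python
# Fresnel-domain reconstruction sketch (ours); transfer-function propagation.
import numpy as np

def fresnel_propagate(u0, wavelength, z, pixel_pitch):
    """Propagate the complex field u0 a distance z (Fresnel transfer function)."""
    n, m = u0.shape
    fy = np.fft.fftfreq(n, d=pixel_pitch)
    fx = np.fft.fftfreq(m, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

if __name__ == "__main__":
    # hypothetical parameters: 512x512 hologram, 8 um pixels, 633 nm, z = 0.3 m
    rng = np.random.default_rng(0)
    field = rng.random((512, 512)) * np.exp(1j * 2 * np.pi * rng.random((512, 512)))
    full = fresnel_propagate(field, 633e-9, 0.3, 8e-6)
    phase_only = fresnel_propagate(np.exp(1j * np.angle(field)), 633e-9, 0.3, 8e-6)
    err = np.mean(np.abs(np.abs(full) - np.abs(phase_only)) ** 2)
    print("mean squared amplitude error, phase-only vs full field:", err)
```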
Abstract:
Current technology trends in the medical device industry call for the fabrication of massive arrays of microfeatures, such as microchannels, onto non-silicon material substrates with high accuracy, superior precision, and high throughput. Microchannels are typical features used in medical devices for medication dosing into the human body and for analyzing DNA arrays or cell cultures. In this study, the capabilities of machining systems for micro-end milling have been evaluated by conducting experiments, regression modeling, and response surface methodology. In the machining experiments using micromilling, arrays of microchannels are fabricated on aluminium and titanium plates, and the feature size and accuracy (width and depth) and surface roughness are measured. Multicriteria decision making for material and process parameter selection for the desired accuracy is investigated by using the particle swarm optimization (PSO) method, an evolutionary computation method inspired by genetic algorithms (GA). Appropriate regression models are utilized within the PSO, and the optimum selection of micromilling parameters for microchannel feature accuracy and surface roughness is performed. An analysis of optimal micromachining parameters in decision variable space is also conducted. This study demonstrates the advantages of evolutionary computing algorithms in micromilling decision making and process optimization investigations, and it can be expanded to other applications.
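To make the PSO step concrete, here is a minimal sketch (ours); the quadratic response surface and the parameter ranges are hypothetical stand-ins for the fitted regression models, used only for illustration.

```python
# Particle swarm optimization over a hypothetical roughness response surface.
import numpy as np

def roughness(x):
    """Hypothetical quadratic response surface for surface roughness (um)."""
    speed, feed = x[..., 0], x[..., 1]
    return 0.2 + 1e-9 * (speed - 40_000.0) ** 2 + 0.04 * (feed - 2.0) ** 2

def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), objective(x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = objective(x)
        better = val < pbest_val
        pbest[better] = x[better]
        pbest_val[better] = val[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

if __name__ == "__main__":
    bounds = np.array([[20_000, 60_000],   # spindle speed, rpm (hypothetical range)
                       [0.5, 6.0]])        # feed per flute, um (hypothetical range)
    best_x, best_val = pso(roughness, bounds)
    print("best parameters:", best_x, "predicted roughness:", round(best_val, 3))
```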
Abstract:
"Vegeu el resum a l'inici del documents del fitxer adjunt"
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
"Vegeu el resum a l'inici del document del fitxer ajunt."
Abstract:
In this paper, we develop numerical algorithms with small storage and operation-count requirements for the computation of invariant tori in Hamiltonian systems (exact symplectic maps and Hamiltonian vector fields). The algorithms are based on the parameterization method and follow closely the proof of the KAM theorem given in [LGJV05] and [FLS07]. They essentially consist in solving, by a Newton method, a functional equation satisfied by the invariant tori. Using some geometric identities, it is possible to perform a Newton step using little storage and few operations. In this paper we focus on the numerical issues of the algorithms (speed, storage and stability) and we refer to the mentioned papers for the rigorous results. We show how to compute efficiently both maximal invariant tori and whiskered tori, together with the associated invariant stable and unstable manifolds of whiskered tori. Moreover, we present fast algorithms for the iteration of the quasi-periodic cocycles and the computation of the invariant bundles, which is a preliminary step for the computation of whiskered invariant tori. Since quasi-periodic cocycles appear in other contexts, this section may be of independent interest. The numerical methods presented here make it possible to compute, in a unified way, primary and secondary invariant KAM tori; secondary tori are invariant tori which can be contracted to a periodic orbit. We present some preliminary results that ensure that the methods are indeed implementable and fast. We postpone optimized implementations and results on the breakdown of invariant tori to a future paper.
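One ingredient mentioned above, the fast iteration of quasi-periodic cocycles, can be illustrated with a short sketch (ours, not the paper's implementation). The doubling identity M_{2N}(θ) = M_N(θ + Nω) M_N(θ) gives M_{2^k} in k steps instead of 2^k matrix products; the rotation of a periodic matrix-valued grid function is performed in Fourier space. The 2x2 example cocycle is hypothetical.

```python
# Doubling trick for iterating a quasi-periodic cocycle over a rotation omega.
import numpy as np

def rotate(f_grid, shift):
    """Evaluate a periodic grid function (shape (n, 2, 2), grid on [0, 2*pi)) at theta + shift via FFT."""
    n = f_grid.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)                 # integer wave numbers
    phase = np.exp(1j * k * shift)
    return np.real(np.fft.ifft(np.fft.fft(f_grid, axis=0) * phase[:, None, None], axis=0))

def iterate_cocycle(A_grid, omega, doublings):
    """Return M_{2**doublings}(theta_j) on the grid, using M_{2N} = M_N(. + N*omega) M_N."""
    M, N = A_grid.copy(), 1
    for _ in range(doublings):
        M = np.einsum('tij,tjk->tik', rotate(M, N * omega), M)
        N *= 2
    return M

if __name__ == "__main__":
    n_grid = 256
    omega = 2 * np.pi * (np.sqrt(5.0) - 1.0) / 2.0   # golden-mean rotation
    theta = 2 * np.pi * np.arange(n_grid) / n_grid
    # hypothetical example: an almost-Mathieu-type cocycle with E = 0, lambda = 1.5
    A = np.zeros((n_grid, 2, 2))
    A[:, 0, 0], A[:, 0, 1], A[:, 1, 0] = -3.0 * np.cos(theta), -1.0, 1.0
    k = 6
    M = iterate_cocycle(A, omega, doublings=k)       # M_64 in 6 doubling steps
    le = np.mean(np.log(np.linalg.norm(M, axis=(1, 2)))) / 2 ** k
    print("rough Lyapunov-exponent estimate:", round(le, 3))
```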
Abstract:
Lean meat percentage (LMP) is the criterion for carcass classification, and it must be measured on line objectively. The aim of this work was to compare the root mean square error of prediction (RMSEP) of the LMP measured with the following devices: Fat-O-Meat’er (FOM), UltraFOM (UFOM), AUTOFOM and VCS2000. For this reason the same 99 carcasses were measured using all four devices and dissected according to the European Reference Method. Moreover, a subsample of the carcasses (n=77) was fully scanned with X-ray Computed Tomography (CT) equipment. The RMSEP calculated with leave-one-out cross-validation was lower for FOM and AUTOFOM (1.8% and 1.9%, respectively) and higher for UFOM and VCS2000 (2.3% for both devices). The error obtained with CT was the lowest (0.96%), in accordance with previous results, but CT cannot be used on line. It can be concluded that FOM and AUTOFOM presented better accuracy than UFOM and VCS2000.
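The leave-one-out RMSEP used above can be computed as in the following minimal sketch (ours); the linear model and the two hypothetical predictors stand in for the probe measurements actually recorded by each device.

```python
# Leave-one-out RMSEP for a linear prediction of lean meat percentage.
import numpy as np

def rmsep_loo(X, y):
    """Leave-one-out RMSEP of an ordinary-least-squares prediction of y from X."""
    n = len(y)
    errors = np.empty(n)
    for i in range(n):
        train = np.arange(n) != i
        Xi = np.column_stack([np.ones(train.sum()), X[train]])   # add intercept
        beta, *_ = np.linalg.lstsq(Xi, y[train], rcond=None)
        pred = np.concatenate([[1.0], X[i]]) @ beta
        errors[i] = y[i] - pred
    return np.sqrt(np.mean(errors ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 99
    X = rng.normal(size=(n, 2))          # e.g. fat depth and muscle depth (hypothetical)
    y = 60 - 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=1.8, size=n)   # simulated LMP
    print(f"LOO RMSEP: {rmsep_loo(X, y):.2f} % lean meat")
```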
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."