80 results for "classic"
Abstract:
Market failure can be corrected using different regulatory approaches ranging from high to low intervention. Recently, classic regulations have been criticized as costly and economically irrational, and policy makers are therefore giving more consideration to soft regulatory techniques such as information remedies. However, despite the plethora of food information conveyed by different media, there appears to be a lack of studies exploring how consumers evaluate this information and how trust in publishers influences their choices of food information. To fill this gap, this study investigates which topics are most relevant to consumers, who should disseminate trustworthy food information, and how communication should be conveyed and segmented. Primary data were collected through both qualitative (in-depth interviews and focus groups) and quantitative research (web and mail surveys). Attitudes, willingness to pay for food information and trust in public and private sources conveying information through a new food magazine were assessed using both multivariate statistical methods and econometric analysis. The study shows that consumer attitudes towards food information topics can be summarized along three cognitive-affective dimensions: the agro-food system, enjoyment and wellness. Information related to health risks caused by nutritional disorders and to food safety issues caused by bacteria and chemical substances is the most important for about 90% of respondents. Food information related to regulations and traditions is also considered important by more than two thirds of respondents, while information about food production and processing techniques, lifestyle and food fads is considered less important by the majority of respondents. Trust in food information disseminated by public bodies is higher than trust in information from private bodies. This directly affects willingness to pay (WTP) for food information provided by public and private publishers when markets are shocked by a food safety incident. WTP for the consumer association (€1.80) and the European Food Safety Authority (€1.30) is higher than WTP for the independent and food-industry publishers, which clusters around zero euros. Furthermore, trust in the type of publisher plays a key role in food information market segmentation, together with socio-demographic and economic variables such as gender, age, presence of children and income. These findings invite policy makers to consider information remedies delivered through trusted sources to specific segments of consumers as an interesting soft alternative to the classic way of regulating modern food markets.
Abstract:
BACKGROUND. To use spectra acquired by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry (MS) from pre- and post-digital rectal examination (DRE) urine samples to search for discriminating peaks that can adequately distinguish between benign and malignant prostate conditions, and to identify the peaks’ underlying biomolecules. METHODS. Twenty-five participants with prostate cancer (PCa) and 27 participants with a variety of benign prostatic conditions, as confirmed by a 10-core tissue biopsy, were included. Pre- and post-DRE urine samples were prepared for MALDI MS profiling using an automated clean-up procedure. Following mass spectra collection and processing, peak mass and intensity were extracted and subjected to statistical analysis to identify peaks capable of distinguishing between benign and cancerous conditions. Logistic regression was used to combine markers to create a sensitive and specific test. RESULTS. A peak at m/z 10,760 was identified as β-microseminoprotein (β-MSMB) and found, based on the peak’s average area, to be statistically lower in urine from PCa participants. By combining serum prostate-specific antigen (PSA) levels with MALDI MS-measured β-MSMB levels, optimum threshold values obtained from receiver operating characteristic (ROC) curves gave an increased sensitivity of 96% at a specificity of 26%. CONCLUSIONS. These results demonstrate that a simple sample clean-up followed by MALDI MS profiling reveals significant differences in MSMB abundance in post-DRE urine samples. Combining these with PSA serum levels obtained from a classic clinical assay led to high classification accuracy for PCa in the studied sample set. Our results need to be validated in a larger multicenter prospective randomized clinical trial.
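The marker-combination step described above lends itself to a short illustration. The sketch below, in Python with scikit-learn, combines two markers (serum PSA and an MSMB peak area) with logistic regression and reads a high-sensitivity operating point off the ROC curve. All data and parameter values are synthetic stand-ins, not the study's measurements.

```python
# Hypothetical sketch (scikit-learn): combining two markers with
# logistic regression and choosing an operating point from the ROC
# curve. All data are synthetic stand-ins, not the study's values.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y = np.array([1] * 25 + [0] * 27)                    # 1 = PCa, 0 = benign
psa = rng.lognormal(np.where(y == 1, 2.0, 1.5), 0.5) # serum PSA stand-in
msmb = rng.normal(np.where(y == 1, 0.8, 1.2), 0.3)   # MSMB peak area, lower in PCa

X = np.column_stack([psa, msmb])
clf = LogisticRegression().fit(X, y)
score = clf.predict_proba(X)[:, 1]                   # combined marker score
print("AUC:", roc_auc_score(y, score))

# Pick the first threshold whose sensitivity reaches a chosen target
fpr, tpr, thr = roc_curve(y, score)
i = np.argmax(tpr >= 0.96)
print(f"threshold={thr[i]:.2f}  sensitivity={tpr[i]:.2f}  specificity={1 - fpr[i]:.2f}")
```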
Abstract:
Wernicke’s aphasia occurs following a stroke to classical language comprehension regions in the left temporoparietal cortex. Consequently, auditory-verbal comprehension is significantly impaired in Wernicke’s aphasia, but the capacity to comprehend visually presented materials (written words and pictures) is partially spared. This study used fMRI to investigate the neural basis of written word and picture semantic processing in Wernicke’s aphasia, with the wider aim of examining how the semantic system is altered following damage to the classical comprehension regions. Twelve participants with Wernicke’s aphasia and twelve control participants performed semantic animate-inanimate judgements and a visual height judgement baseline task. Whole-brain and ROI analyses in Wernicke’s aphasia and control participants found that semantic judgements were underpinned by activation in the ventral and anterior temporal lobes bilaterally. The Wernicke’s aphasia group displayed an “over-activation” in comparison to control participants, indicating that anterior temporal lobe regions become increasingly influential following reduction in posterior semantic resources. Semantic processing of written words in Wernicke’s aphasia was additionally supported by recruitment of the right anterior superior temporal lobe, a region previously associated with recovery from auditory-verbal comprehension impairments. Overall, the results accord with models in which the anterior temporal lobes are crucial for multimodal semantic processing and may be accessed without support from the classic posterior comprehension regions.
Abstract:
The inclusion of the direct and indirect radiative effects of aerosols in high-resolution global numerical weather prediction (NWP) models is increasingly recognised as important for the improved accuracy of short-range weather forecasts. In this study the impacts of increasing the aerosol complexity in the global NWP configuration of the Met Office Unified Model (MetUM) are investigated. A hierarchy of aerosol representations is evaluated, including three-dimensional monthly mean speciated aerosol climatologies, fully prognostic aerosols modelled using the CLASSIC aerosol scheme and, finally, initialised aerosols using assimilated aerosol fields from the GEMS project. The prognostic aerosol schemes are better able to predict the temporal and spatial variation of atmospheric aerosol optical depth, which is particularly important in cases of large sporadic aerosol events such as major dust storms or forest fires. Including the direct effect of aerosols improves model biases in outgoing long-wave radiation over West Africa due to a better representation of dust. However, uncertainties in dust optical properties propagate to its direct effect and the subsequent model response. Inclusion of the indirect aerosol effects improves surface radiation biases at the North Slope of Alaska ARM site due to lower cloud amounts in high-latitude clean-air regions, which leads to improved temperature and height forecasts in that region. Impacts on the global mean model precipitation and large-scale circulation fields were found to be generally small in the short-range forecasts. However, the indirect aerosol effect leads to a strengthening of the low-level monsoon flow over the Arabian Sea and Bay of Bengal and an increase in precipitation over Southeast Asia. Regional impacts on the African Easterly Jet (AEJ) are also presented, with the large dust loading in the aerosol climatology enhancing the heat low over West Africa and weakening the AEJ. This study highlights the importance of including a more realistic treatment of aerosol-cloud interactions in global NWP models and the potential for improved global environmental prediction systems through the incorporation of more complex aerosol schemes.
Abstract:
Superposed epoch studies have been carried out to determine the mid-latitude ionospheric response to southward turnings of the interplanetary magnetic field (IMF). This is compared with the geomagnetic response, as seen in the indices Kp, AE and Dst. The solar wind, IMF and geomagnetic data used were hourly averages from the years 1967–1989 and thus cover a full 22-year cycle of the solar magnetic field. These data were divided into subsets, determined by the magnitude of the southward turning and the concomitant increase in solar wind pressure. The superposed epoch studies used the time of the southward turning as time zero. The response of the mid-latitude ionosphere is studied using the F-layer critical frequencies, foF2, from hourly soundings by the Slough ionosonde, and their deviations from the monthly median values, δfoF2. For southward turnings with a change in Bz of δBz > 11.5 nT accompanied by a solar wind dynamic pressure P exceeding 5 nPa, the F-region critical frequency foF2 shows a marked decrease, reaching a minimum value about 20 h after the southward turning and recovering to pre-event values over the subsequent 24 h, on average. The Dst index shows the classic storm-time decrease to about −60 nT; four days later, the index has still to recover fully and is at about −25 nT. Both the Kp and AE indices rise before the southward turnings, when the IMF is strongly northward but the solar wind dynamic pressure is enhanced. The average AE index does register a clear isolated pulse (averaging 650 nT for 2 h, compared with a background peak level of near 450 nT at these times), showing enhanced energy deposition at high latitudes in substorms, but, like Kp, it remains somewhat enhanced for several days, even after the average IMF has returned to zero after 1 day. This AE background decays away over several days as the Dst index recovers, indicating some contamination of the currents observed at the AE stations by the continuing enhanced equatorial ring current. For data averaged over all seasons, the critical frequencies at Slough are depressed by 1.3 MHz, which is close to the lower decile of the overall distribution of δfoF2 values. Taking 30-day periods around the summer and winter solstices, the largest depressions are 1.6 and 1.2 MHz, respectively. This seasonal dependence is confirmed by a similar study for a Southern Hemisphere station, Argentine Islands, giving peak depressions of 1.8 MHz in summer and 0.5 MHz in winter. For the subset of turnings with δBz > 11.5 nT but P ≤ 5 nPa, the response of the geomagnetic indices is similar but smaller, while the change in δfoF2 has all but disappeared. This confirms that the energy deposited at high latitudes, which produces the geomagnetic and ionospheric disturbances following a southward turning of the IMF, increases with the energy density (dynamic pressure) of the solar wind flow. The magnitudes of all responses are shown to depend on δBz. At Slough, the peak depression always occurs when the station rotates into the noon sector, and the largest ionospheric response is for southward turnings seen between 15 and 21 UT.
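The superposed epoch technique itself is simple to express in code. Below is a minimal Python/NumPy sketch, assuming an hourly series (a stand-in for δfoF2) and a list of epoch-zero times; the series, the event times and the injected storm-like response are all synthetic, chosen only so the average shows a minimum near +20 h as the abstract describes.

```python
# Minimal superposed epoch analysis sketch. All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
hours = 24 * 365
series = rng.normal(0.0, 0.5, hours)          # stand-in hourly delta-foF2 (MHz)
events = rng.integers(100, hours - 100, 20)   # stand-in epoch-zero indices

# Impose a storm-like depression so the average shows a clear signal
for t0 in events:
    series[t0:t0 + 44] -= 1.3 * np.exp(-np.abs(np.arange(44) - 20) / 12.0)

before, after = 24, 72                        # window: -24 h to +72 h
windows = np.stack([series[t0 - before:t0 + after] for t0 in events])
epoch_mean = windows.mean(axis=0)             # the superposed epoch average

lag = np.arange(-before, after)
print("minimum at lag (h):", lag[np.argmin(epoch_mean)])
```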
Abstract:
Background: Autism Spectrum Conditions (ASC) are neurodevelopmental conditions characterized by difficulties in communication and social interaction, alongside unusually repetitive behaviours and narrow interests. Asperger Syndrome (AS) is one subgroup of ASC and differs from classic autism in that in AS there is no language or general cognitive delay. Genetic, epigenetic and environmental factors are implicated in ASC, and genes involved in neural connectivity and neurodevelopment are good candidates for studying susceptibility to ASC. The aryl-hydrocarbon receptor nuclear translocator 2 (ARNT2) gene encodes a transcription factor involved in neurodevelopmental processes, neuronal connectivity and cellular responses to hypoxia. A mutation in this gene has been identified in individuals with ASC, and single nucleotide polymorphisms (SNPs) have been nominally associated with AS and autistic traits in previous studies. Methods: In this study, we tested 34 SNPs in ARNT2 for association with AS in 118 cases and 412 controls of Caucasian origin. P values were adjusted for multiple comparisons, and linkage disequilibrium (LD) among the SNPs analysed was calculated in our sample. Finally, SNP annotation allowed functional and structural analyses of the genetic variants in ARNT2. We tested the replicability of our result using the genome-wide association study (GWAS) database of the Psychiatric Genomics Consortium (PGC). Results: We report a statistically significant association of rs17225178 with AS. This SNP modifies transcription factor binding sites and regions that regulate the chromatin state in neural cell lines. It also lies in an LD block in our sample, alongside other genetic variants that alter chromatin regulatory regions in neural cells. Conclusions: These findings demonstrate that rs17225178 in the ARNT2 gene is associated with AS and support previous studies pointing to an involvement of this gene in the predisposition to ASC.
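For orientation, a case-control SNP association test with multiple-comparison correction can be sketched in a few lines. This is an illustrative allelic chi-squared test with Bonferroni adjustment over 34 SNPs, not the study's actual pipeline; all counts are synthetic.

```python
# Illustrative per-SNP allelic chi-squared test for a case-control
# sample, Bonferroni-adjusted over the SNPs tested. Counts are synthetic.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
n_snps, n_cases, n_controls = 34, 118, 412

p_values = []
for _ in range(n_snps):
    # 2x2 table of minor/major allele counts in cases vs controls
    case_minor = rng.binomial(2 * n_cases, 0.25)
    ctrl_minor = rng.binomial(2 * n_controls, 0.22)
    table = np.array([[case_minor, 2 * n_cases - case_minor],
                      [ctrl_minor, 2 * n_controls - ctrl_minor]])
    chi2, p, dof, _ = chi2_contingency(table)
    p_values.append(p)

adjusted = np.minimum(np.array(p_values) * n_snps, 1.0)   # Bonferroni
print("significant after correction:", np.sum(adjusted < 0.05))
```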
Abstract:
The classic vertical advection-diffusion (VAD) balance is a central concept in studying the ocean heat budget, in particular in simple climate models (SCMs). Here we present a new framework, traceable to the models’ circulation as well as to vertical mixing and diffusion processes, for calibrating the parameters of the VAD equation to the vertical ocean heat balance of two fully coupled climate models. Based on temperature diagnostics, we derive an effective vertical velocity w∗ and turbulent diffusivity k∗ for each individual physical process. In steady state, we find that the residual vertical velocity and diffusivity change sign at mid-depth, highlighting the different regional contributions of isopycnal and diapycnal diffusion in balancing the models’ residual advection and vertical mixing. We quantify the impacts of the time evolution of the effective quantities under a transient 1%CO2 simulation and make the link to the parameters of currently employed SCMs.
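For reference, the classic steady-state VAD balance has the textbook form below (a standard statement of the Munk-type abyssal balance, written here with the abstract's effective quantities; the paper's calibration framework itself is not reproduced).

```latex
% Classic steady-state vertical advection-diffusion (VAD) balance for
% temperature T(z), with effective upwelling w^* and effective
% turbulent diffusivity k^*:
\[
  w^{*}\,\frac{\partial T}{\partial z}
  = \frac{\partial}{\partial z}\!\left( k^{*}\,\frac{\partial T}{\partial z} \right)
\]
% For constant k^*, solutions are exponential in depth,
% T(z) = A + B\,e^{w^{*} z / k^{*}}, so the ratio k^*/w^* sets the
% thermocline depth scale.
```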
Abstract:
Learning a low-dimensional manifold from highly nonlinear, high-dimensional data has become increasingly important for discovering intrinsic representations that can be used for data visualization and preprocessing. The autoencoder is a powerful dimensionality reduction technique based on minimizing reconstruction error, and it has regained popularity because it can be used efficiently for greedy pretraining of deep neural networks. Compared with neural networks (NNs), Gaussian processes (GPs) have shown advantages in model inference, optimization and performance. GPs have been successfully applied in nonlinear dimensionality reduction (DR) algorithms, such as the Gaussian Process Latent Variable Model (GPLVM). In this paper we propose the Gaussian Processes Autoencoder Model (GPAM) for dimensionality reduction, extending the classic NN-based autoencoder to a GP-based autoencoder. More interestingly, the new model can also be viewed as a back-constrained GPLVM (BC-GPLVM) in which the smooth back-constraint function is represented by a GP. Experiments verify the performance of the newly proposed model.
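To fix ideas, here is a minimal sketch of the classic NN-based autoencoder that GPAM generalizes; the paper's GP-based encoder/decoder is not reproduced. An MLP is trained to reconstruct its own input, and the two-unit bottleneck layer serves as the latent code. The data and all settings are illustrative.

```python
# Minimal classic NN autoencoder sketch (scikit-learn MLP trained to
# reconstruct its input); in GPAM both mappings would instead be GPs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
# Synthetic 10-D data lying near a nonlinear 2-D manifold
z = rng.normal(size=(500, 2))
X = np.tanh(z @ rng.normal(size=(2, 10))) + 0.05 * rng.normal(size=(500, 10))

ae = MLPRegressor(hidden_layer_sizes=(2,), activation="tanh",
                  max_iter=5000, random_state=0)
ae.fit(X, X)                       # target = input: minimize reconstruction error

# Encoder: forward pass through the bottleneck layer
latent = np.tanh(X @ ae.coefs_[0] + ae.intercepts_[0])
recon_mse = np.mean((ae.predict(X) - X) ** 2)
print("latent shape:", latent.shape, "reconstruction MSE:", recon_mse)
```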
Abstract:
We monitored 8- and 10-year-old children’s eye movements as they read sentences containing a temporary syntactic ambiguity to obtain a detailed record of their online processing. Children showed the classic garden-path effect in online processing. Their reading was disrupted following disambiguation, relative to control sentences containing a comma to block the ambiguity, although the disruption occurred somewhat later than would be expected for mature readers. We also asked children questions to probe their comprehension of the syntactic ambiguity offline. They made more errors following ambiguous sentences than following control sentences, demonstrating that the initial incorrect parse of the garden-path sentence influenced offline comprehension. These findings are consistent with “good enough” processing effects seen in adults. While faster reading times and more regressions were generally associated with better comprehension, spending longer reading the question predicted comprehension success specifically in the ambiguous condition. This suggests that reading the question prompted children to reconstruct the sentence and engage in some form of processing, which in turn increased the likelihood of comprehension success. Older children were more sensitive to the syntactic function of commas, and, overall, they were faster and more accurate than younger children.
Abstract:
We provide a new legal perspective for the antitrust analysis of margin squeeze conduct. Building on recent economic analysis, we explain why margin squeeze conduct should be evaluated solely under adjusted predatory pricing standards. The adjustment corresponds to an increase in the cost benchmark used in the predatory pricing test, which is raised to include the opportunity cost of missed upstream sales. This can reduce the risks of both false positives and false negatives in margin squeeze cases. We justify this approach by explaining why the classic arguments against above-cost predatory pricing typically do not hold in the vertical structures where margin squeezes take place, and by presenting case-law evidence supporting this adjustment. Our approach can help to reconcile the divergent US and EU antitrust stances on margin squeeze.
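The adjusted benchmark described above reduces to simple arithmetic. The sketch below is a hedged numerical illustration of such a test, with all prices and costs invented for the example; it is not a statement of any jurisdiction's legal standard.

```python
# Hedged numerical sketch of an adjusted predatory pricing test: the
# downstream cost benchmark is raised by the upstream margin forgone
# when the integrated firm makes a retail sale itself. Figures are
# illustrative only.

def adjusted_predation_test(retail_price, downstream_cost,
                            wholesale_price, upstream_cost,
                            displacement_ratio=1.0):
    """True if the retail price falls below the adjusted
    (opportunity-cost inclusive) benchmark."""
    # Upstream margin lost per retail sale that displaces a wholesale sale
    opportunity_cost = displacement_ratio * (wholesale_price - upstream_cost)
    benchmark = downstream_cost + opportunity_cost
    return retail_price < benchmark

# Retail price 10 covers downstream cost 6 but not the extra 5 of
# forgone upstream margin (wholesale 9 minus upstream cost 4):
print(adjusted_predation_test(10.0, 6.0, 9.0, 4.0))   # True  -> squeeze
print(adjusted_predation_test(12.0, 6.0, 9.0, 4.0))   # False -> passes
```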
Abstract:
Purpose – The purpose of this paper is to introduce the debate forum on internationalization motives of this special issue of Multinational Business Review. Design/methodology/approach – The authors reflect on the background and evolution of the internationalization motives over the past few decades, and then provide suggestions for how to use the motives in future analyses. The authors also reflect on the contributions of the accompanying articles to the debate. Findings – There continue to be new developments in the way in which firms organize themselves as multinational enterprises (MNEs), and this implies that the “classic” motives originally introduced by Dunning in 1993 need to be revisited. Dunning’s motives and arguments were deductive and atheoretical, and they were intended to be used as a toolkit in conjunction with other theories and frameworks; they are not an alternative to a classification of possible MNE strategies. Originality/value – This paper and the ones that accompany it provide a deeper and more nuanced understanding of internationalization motives for future research to build on.
Abstract:
Although medieval rentals have been extensively studied, few scholars have used them to analyse variations in the rents paid on individual properties within a town. It has been claimed that medieval rents did not reflect economic values or market forces, but were set according to social and political rather than economic criteria, and remained ossified at customary levels. This paper uses hedonic regression methods to test whether property rents in medieval Gloucester were influenced by classic economic factors such as the location and use of a property. It investigates both rents and local rates (landgavel), and explores the relationship between the two. It also examines spatial autocorrelation. It finds significant relationships between urban rents and property characteristics that are similar to those found in modern studies. The findings are consistent with the view that, in Gloucester at least, medieval rents were strongly influenced by classical economic factors working through a competitive urban property market.
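The hedonic approach mentioned above regresses (log) rent on property characteristics, so that each coefficient is an implicit price. The sketch below shows the general form on synthetic data; the variables, coefficients and data are invented for illustration and are not the Gloucester rentals.

```python
# Illustrative hedonic regression: log rent on location and use
# characteristics. Data are synthetic, not the medieval rentals.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 200
dist_market = rng.uniform(0, 1, n)      # distance to market square (normalized)
frontage = rng.uniform(2, 12, n)        # street frontage (metres)
commercial = rng.integers(0, 2, n)      # 1 = shop/commercial use

log_rent = (0.5 - 0.8 * dist_market + 0.05 * frontage
            + 0.4 * commercial + rng.normal(0, 0.2, n))

X = sm.add_constant(np.column_stack([dist_market, frontage, commercial]))
fit = sm.OLS(log_rent, X).fit()
print(fit.params)       # implicit prices: distance penalty, frontage premium...
print(fit.rsquared)
```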
Abstract:
The Bloom filter is a space-efficient randomized data structure for representing a set and supporting membership queries. Bloom filters intrinsically allow false positives, but the space savings they offer outweigh this disadvantage if the false positive rate is kept sufficiently low. Inspired by the recent application of the Bloom filter in a novel multicast forwarding fabric, this paper proposes a variant of the Bloom filter, the optihash. The optihash optimizes the false positive rate at the stage of Bloom filter formation, using the same amount of space at the cost of slightly more processing than the classic Bloom filter. Bloom filters are often used in situations where a fixed amount of space is a primary constraint. We present the optihash as a good alternative to the Bloom filter, since the amount of space is the same and the improvement in false positives can justify the additional processing. Specifically, we show via simulations and numerical analysis that the optihash can reduce and control false positive occurrences at the cost of a small amount of additional processing. The simulations are carried out for in-packet forwarding. In this framework, the Bloom filter is used as a compact link/route identifier and is placed in the packet header to encode the route. At each node, the Bloom filter is queried for membership in order to make forwarding decisions. A false positive in the forwarding decision translates into packets forwarded along an unintended outgoing link. By using the optihash, false positives can be reduced. The optimization processing is carried out in an entity termed the Topology Manager, which is part of the control plane of the multicast forwarding fabric; this processing is carried out only on a per-session basis, not for every packet. The aim of this paper is to present the optihash and evaluate its false positive performance via simulations, in order to measure the influence of different parameters on the false positive rate. The false positive rate of the optihash is then compared with the false positive probability of the classic Bloom filter.
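For readers new to the structure, here is a minimal sketch of the classic Bloom filter that the optihash modifies, together with the standard false positive estimate; the optihash's formation-stage optimization itself is not reproduced. The link names are invented for the forwarding example.

```python
# Classic Bloom filter sketch: m bits, k salted hash functions.
import hashlib
import math

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k            # m bits, k hash functions
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):        # may yield false positives
        return all(self.bits[pos] for pos in self._positions(item))

# In-packet forwarding use: encode a route's outgoing links, then
# query each candidate link at a node to make the forwarding decision.
bf = BloomFilter(m=256, k=4)
for link in ("linkA", "linkB", "linkC"):
    bf.add(link)
print("linkB" in bf)                     # True
print("linkX" in bf)                     # False with high probability

n = 3                                    # items inserted
print("expected FP rate:", (1 - math.exp(-bf.k * n / bf.m)) ** bf.k)
```

A false positive here corresponds exactly to the abstract's unintended outgoing link: a link ID that was never added but whose k bit positions happen to be set.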
Abstract:
Classic financial agency theory recommends compensation through stock options rather than shares to counteract excessive risk aversion in agents. In a setting where any kind of risk taking is suboptimal for shareholders, we show that excessive risk taking may occur for one of two reasons: risk preferences or incentives. Even when compensated through restricted company stock, experimental CEOs take large amounts of excessive risk. This contradicts classical financial theory but can be explained by risk preferences that are not uniform over the probability and outcome spaces, in particular risk seeking for small-probability gains and for large-probability losses. Compensation through options further increases risk taking, as expected. We show that this effect is driven mainly by the personal asset position of the experimental CEO, thus having deleterious effects on company performance.
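The non-uniform risk preferences invoked above are usually modelled with an inverse-S probability weighting function. The snippet below evaluates the standard Tversky-Kahneman (1992) form, which is one common way (not necessarily the paper's) to rationalize risk seeking for small-probability gains; the gamma value is the conventional estimate, used here only for illustration.

```python
# Inverse-S probability weighting (Tversky & Kahneman, 1992 form).
def w(p, gamma=0.61):
    """Weighting function; gamma < 1 gives the inverse-S shape."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"p = {p:4.2f}  ->  w(p) = {w(p):.3f}")
# Small probabilities are overweighted (w(0.01) > 0.01) and large ones
# underweighted, so a small chance of a big gain looks better than its
# expected value -- excessive risk taking without any incentive motive.
```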
Abstract:
Much of the work in intercultural communication studies in the past decade, especially in the field of applied linguistics, has been devoted to ‘disinventing’ the notion of culture. The problem with the word ‘culture’ as it has been used in anthropology, sociology, and everyday life, it has been pointed out, is that it is used as a noun, conceived of as something ‘solid’, an essential set of traits or characteristics of certain people or groups, something people ‘have’ rather than something they ‘do’ (Scollon, Scollon, & Jones, 2012). Among the most famous statements of this position is Brian Street’s classic paper ‘Culture is a Verb’ (1993), in which he argues that culture should be treated as ‘a signifying process, the active construction of meaning, rather than the static and reified or nominalizing’ sense in which the word is often used in anthropology, some linguistics circles, and everyday conversation.