950 results for Multi-choice aspiration levels
Abstract:
Fingerprinting is a well-known approach for identifying multimedia data without having the original data present, using instead what amounts to its essence or 'DNA'. Current approaches show insufficient deployment of three types of knowledge that could be brought to bear in providing a fingerprinting framework that remains effective and efficient and can accommodate both whole and elemental protection at appropriate levels of abstraction to suit various Foci of Interest (FoI) in an image or cross-media artefact. Our proposed framework therefore aims to deliver selective composite fingerprinting that remains responsive to the requirements for protecting the whole or parts of an image which may be of particular interest and especially vulnerable to attempts at rights violation. This is powerfully aided by leveraging both multi-modal information and a rich spectrum of collateral context knowledge, including image-level collaterals as well as the inevitably needed market intelligence knowledge, such as profiling of customers' social network interests, which we can deploy as a crucial component of our Fingerprinting Collateral Knowledge. This knowledge is used in selecting the special FoIs within an image or other media content that have to be selectively and collaterally protected.
Abstract:
Fingerprinting is a well-known approach for identifying multimedia data without having the original data present, using instead what amounts to its essence or 'DNA'. Current approaches show insufficient deployment of various types of knowledge that could be brought to bear in providing a fingerprinting framework that remains effective and efficient and can accommodate both whole and elemental protection at appropriate levels of abstraction to suit various Zones of Interest (ZoI) in an image or cross-media artefact. The proposed framework aims to deliver selective composite fingerprinting that is powerfully aided by leveraging both multi-modal information and a rich spectrum of collateral context knowledge, including image-level collaterals as well as the inevitably needed market intelligence knowledge, such as profiling of customers' social network interests, which we can deploy as a crucial component of our fingerprinting collateral knowledge.
Abstract:
Since its introduction in 1993, the Message Passing Interface (MPI) has become a de facto standard for writing High Performance Computing (HPC) applications on clusters and Massively Parallel Processors (MPPs). The recent emergence of multi-core processor systems presents a new challenge for established parallel programming paradigms, including those based on MPI. This paper presents a new Java messaging system called MPJ Express. Using this system, we exploit multiple levels of parallelism - messaging and threading - to improve application performance on multi-core processors. We refer to our approach as nested parallelism. This MPI-like Java library can support nested parallelism by using Java or Java OpenMP (JOMP) threads within an MPJ Express process. The practicality of this approach is assessed by porting to Java a massively parallel structure-formation code from cosmology called Gadget-2. We introduce nested parallelism in the Java version of the simulation code and report good speed-ups. To the best of our knowledge, this is the first time this kind of hybrid parallelism has been demonstrated in a high performance Java application. (C) 2009 Elsevier Inc. All rights reserved.
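The nested-parallelism pattern described above combines message passing between processes with threading inside each process. The paper uses MPJ Express and JOMP threads in Java; the sketch below is a hedged Python analogue using mpi4py and the standard threading module, intended only to show the shape of the pattern, not the authors' implementation (and CPython's global interpreter lock limits real thread-level speed-up for this toy workload).

```python
# Illustrative sketch of nested parallelism: message passing between processes,
# threads within each process. Python analogue (mpi4py + threading) of the
# MPJ Express + JOMP pattern; not the authors' Java implementation.
import threading
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each MPI process owns a slice of a (hypothetical) global array.
local = np.random.rand(1_000_000)

def partial_sum(chunk, out, idx):
    # Thread-level parallelism: each thread reduces one chunk of the local slice.
    out[idx] = chunk.sum()

n_threads = 4
results = [0.0] * n_threads
chunks = np.array_split(local, n_threads)
threads = [threading.Thread(target=partial_sum, args=(c, results, i))
           for i, c in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Process-level parallelism: combine per-process partial sums via MPI messaging.
global_sum = comm.allreduce(sum(results), op=MPI.SUM)
if rank == 0:
    print("global sum:", global_sum)
```

Run with, for example, `mpiexec -n 4 python nested_sketch.py`; the two-level structure (processes for messaging, threads inside each process) is the nested parallelism the abstract refers to.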
Abstract:
Decision theory is the study of models of judgement involved in, and leading to, deliberate and (usually) rational choice. In real estate investment there are normative models for the allocation of assets. These asset allocation models suggest an optimum allocation between the respective asset classes based on the investors' judgements of performance and risk. Real estate is selected, like other assets, on the basis of some criterion, most commonly its marginal contribution to the production of a mean-variance efficient multi-asset portfolio, subject to the investor's objectives and capital rationing constraints. However, decisions are made relative to current expectations and current business constraints. Whilst a decision maker may believe in the required optimum exposure levels as dictated by an asset allocation model, the final decision may, and often will, be influenced by factors outside the parameters of the mathematical model. This paper discusses investors' perceptions and attitudes toward real estate and highlights the important difference between theoretical exposure levels and pragmatic business considerations. It develops a model to identify "soft" parameters in decision making which will influence the optimal allocation for that asset class. This "soft" information may relate to behavioural issues such as the tendency to mirror competitors, a desire to meet weight-of-money objectives, a desire to retain the status quo and many other non-financial considerations. The paper aims to establish the place of property in multi-asset portfolios in the UK and to examine the asset allocation process in practice, with a view to understanding the decision-making process and to looking at investors' perceptions based on an historic analysis of market expectations, a comparison with historic data and an analysis of actual performance.
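For reference, the normative model that the abstract contrasts with "soft" behavioural factors is typically the standard mean-variance (Markowitz) allocation; a generic statement, not taken from the paper itself, is:

```latex
% Generic mean-variance allocation (not from the paper): choose asset-class
% weights w to trade expected return against portfolio risk.
\max_{w}\; w^{\top}\mu \;-\; \frac{\lambda}{2}\, w^{\top}\Sigma\, w
\qquad \text{s.t.}\qquad \mathbf{1}^{\top} w = 1,\quad w \ge 0
```

where mu is the vector of expected asset-class returns, Sigma their covariance matrix and lambda the investor's risk aversion; the "soft" parameters the paper identifies act outside this formulation.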
Abstract:
In order to harness the computational capacity of dissociated cultured neuronal networks, it is necessary to understand neuronal dynamics and connectivity on a mesoscopic scale. To this end, this paper uncovers dynamic spatiotemporal patterns emerging from electrically stimulated neuronal cultures, using hidden Markov models (HMMs) to characterize multi-channel spike trains as a progression of patterns of underlying states of neuronal activity. Experimentation aimed at an optimal choice of parameters for such models is essential, and the results are reported in detail. Results derived from ensemble neuronal data revealed highly repeatable patterns of state transitions on the order of milliseconds in response to probing stimuli.
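As a rough illustration of the modelling step described above, a hedged sketch of fitting an HMM to binned multi-channel spike counts might look like the following; the channel count, bin width, number of hidden states and use of a Gaussian emission model are placeholder assumptions, not the paper's architecture or parameters.

```python
# Hedged sketch: fit an HMM to binned multi-channel spike-train data and
# decode the sequence of hidden activity states. Uses hmmlearn's GaussianHMM
# on spike counts as a generic stand-in for the paper's model.
import numpy as np
from hmmlearn import hmm

n_channels = 60      # e.g. electrodes on a multi-electrode array (assumption)
n_bins = 5000        # number of time bins (assumption)
bin_counts = np.random.poisson(0.5, size=(n_bins, n_channels))  # placeholder data

# Each row is one time bin; columns are per-channel spike counts.
model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=100)
model.fit(bin_counts)

# Viterbi decoding gives the most likely progression of hidden states,
# i.e. the spatiotemporal pattern of population activity over time.
states = model.predict(bin_counts)
print("decoded state sequence (first 20 bins):", states[:20])
print("state transition matrix:\n", model.transmat_)
```

In practice the number of states and other parameters would be chosen by the kind of systematic experimentation the abstract reports, rather than fixed up front as here.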
Abstract:
The paper analyses the emergence of group-specific attitudes and beliefs about tax compliance when individuals interact in a social network. It develops a model in which taxpayers possess a range of individual characteristics – including attitude to risk, potential for success in self-employment, and the weight attached to the social custom for honesty – and make an occupational choice based on these characteristics. Occupations differ in the possibility for evading tax. The social network determines which taxpayers are linked, and information about auditing and compliance is transmitted at meetings between linked taxpayers. Using agent-based simulations, the analysis demonstrates how attitudes and beliefs endogenously emerge that differ across sub-groups of the population. Compliance behaviour is different across occupational groups, and this is reinforced by the development of group-specific attitudes and beliefs. Taxpayers self-select into occupations according to the degree of risk aversion, the subjective probability of audit is sustained above the objective probability, and the weight attached to the social custom differs across occupations. These factors combine to lead to compliance levels that differ across occupations.
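The agent-based mechanism summarised above can be sketched in a few lines; the network structure, parameter values and belief-update rule below are illustrative placeholders (and occupational choice is omitted), not the paper's calibration.

```python
# Minimal, hedged sketch of the agent-based setup: taxpayers on a random
# social network exchange audit information with linked neighbours, and
# subjective beliefs about the audit probability evolve within the network.
# All numbers are illustrative placeholders, not the paper's values.
import random

N = 200                    # number of taxpayers (assumption)
P_LINK = 0.05              # probability two taxpayers are linked (assumption)
TRUE_AUDIT_PROB = 0.05     # objective audit probability (assumption)

# Heterogeneous characteristics: risk aversion and a subjective audit belief.
agents = [{"risk_aversion": random.random(),
           "belief": random.uniform(0.0, 0.2)} for _ in range(N)]
links = [(i, j) for i in range(N) for j in range(i + 1, N)
         if random.random() < P_LINK]

for period in range(50):
    evaders = 0
    for a in agents:
        # Evade if the perceived audit risk is low relative to risk aversion.
        evades = a["belief"] < (1.0 - a["risk_aversion"]) * 0.2
        evaders += evades
        # Personal audit experience nudges the subjective belief up or down.
        was_audited = random.random() < TRUE_AUDIT_PROB
        a["belief"] += 0.1 * ((1.0 if was_audited else 0.0) - a["belief"])
    # Meetings between linked taxpayers transmit audit information:
    # beliefs drift toward the pairwise average.
    for i, j in links:
        mean = 0.5 * (agents[i]["belief"] + agents[j]["belief"])
        agents[i]["belief"] += 0.5 * (mean - agents[i]["belief"])
        agents[j]["belief"] += 0.5 * (mean - agents[j]["belief"])

print("final-period compliance rate:", 1 - evaders / N)
```

The point of the sketch is the mechanism: beliefs are sustained by network transmission rather than by the objective audit probability alone, which is how group-specific attitudes can emerge.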
Abstract:
Aims: To investigate the effect of a therapeutic and a sub-therapeutic chlortetracycline treatment on tetracycline-resistant Salmonella enterica serovar Typhimurium DT104 and on the commensal Escherichia coli in pigs. Methods and Results: Salmonella Typhimurium DT104 was orally administered to all pigs prior to antibiotic treatment and monitored alongside the native E. coli. Higher numbers of S. Typhimurium DT104 were shed from treated pigs than from untreated pigs. This lasted up to 6 weeks post-treatment in the high-dose group. In this group, there was a 30% increase in E. coli with a chlortetracycline minimal inhibitory concentration (MIC) > 16 mg l(-1) and a 10% increase in E. coli with an MIC > 50 mg l(-1) during and 2 weeks post-treatment. This effect was less pronounced in the low-dose group. PCR identified the predominant tetracycline resistance genes in the E. coli as tetA, tetB and tetC. The concentration of chlortetracycline in the pig faeces was measured by HPLC, and levels reached 80 µg g(-1) faeces during treatment. Conclusion: Chlortetracycline treatment increases the proportion of resistant enteric bacteria beyond the current withdrawal time. Significance and Impact of the Study: Treated pigs are more likely to enter abattoirs with higher levels of resistant bacteria than untreated pigs, increasing the risk of these bacteria moving up the food chain and infecting man.
Abstract:
From 2001, the construction of flats and high-density developments increased in England and the building of houses declined. Does this indicate a change in taste or is it a result of government planning policies? In this paper, an analysis is made of the long-term effects of the policy of constraint which has existed for the past 50 years, but the increase in density is identified as occurring primarily after new, revised planning guidance discouraging low-density development was issued in England in 2000. To substantiate this, it is pointed out that the change which occurred in England did not occur in Scotland, where guidance was not changed to encourage high-density residential development. The conclusion that the change is the result of planning policies and not of a change in taste is confirmed by surveys of the occupants of new high-rise developments in Leeds. The new flat-dwellers were predominantly young and childless and expressed the intention of moving out of the city centre and into houses in the near future, when they could. Following recent changes in guidance by the new coalition government, it is expected that the construction of flats in England will fall back to earlier levels over the next few years.
Abstract:
The planning of semi-autonomous vehicles in traffic scenarios is a relatively new problem that contributes towards the goal of making road travel by vehicles free of human drivers. An algorithm needs to ensure optimal real-time planning of multiple vehicles (moving in either direction along a road) in the presence of a complex obstacle network. Unlike other approaches, here we assume that speed lanes are not present and that separate lanes do not need to be maintained for inbound and outbound traffic. Our basic hypothesis is to carry the planning task forward so that a sufficient distance is maintained by each vehicle from all other vehicles, obstacles and road boundaries. We present a 4-layer planning algorithm that consists of road selection (selecting the individual roads of traversal to reach the goal), pathway selection (a strategy to avoid and/or overtake obstacles, road diversions and other blockages), pathway distribution (selecting the position of a vehicle at every instant of time within a pathway), and trajectory generation (generating a curve smooth enough to allow the maximum possible speed). Cooperation between vehicles is handled separately at the different levels, the aim being to maximize the separation between vehicles. Simulated results exhibit smooth, efficient and safe driving of vehicles in multiple scenarios, along with typical vehicle behaviours including following and overtaking.
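The 4-layer decomposition described above can be pictured as a simple pipeline; the sketch below only mirrors the layer boundaries named in the abstract, with stubbed-out logic and hypothetical types, and is not the authors' algorithm.

```python
# Structural sketch of the 4-layer planner named in the abstract:
# road selection -> pathway selection -> pathway distribution -> trajectory
# generation. Bodies are stubs; types and helper names are hypothetical.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class PlanningProblem:
    road_map: dict           # road network (placeholder representation)
    obstacles: List[Point]   # known obstacle positions
    start: Point
    goal: Point

def select_roads(p: PlanningProblem) -> List[str]:
    """Layer 1: choose the sequence of roads that leads to the goal."""
    return ["road_A", "road_B"]  # stub

def select_pathway(p: PlanningProblem, roads: List[str]) -> List[Point]:
    """Layer 2: choose a corridor that avoids/overtakes obstacles and blockages."""
    return [p.start, p.goal]  # stub

def distribute_along_pathway(pathway: List[Point]) -> List[Point]:
    """Layer 3: fix the vehicle's position within the corridor at each instant,
    keeping separation from other vehicles."""
    return pathway  # stub

def generate_trajectory(waypoints: List[Point]) -> List[Point]:
    """Layer 4: fit a curve smooth enough to allow the maximum possible speed."""
    return waypoints  # stub: e.g. spline smoothing in a fuller implementation

def plan(p: PlanningProblem) -> List[Point]:
    roads = select_roads(p)
    pathway = select_pathway(p, roads)
    waypoints = distribute_along_pathway(pathway)
    return generate_trajectory(waypoints)

if __name__ == "__main__":
    toy = PlanningProblem(road_map={}, obstacles=[], start=(0.0, 0.0), goal=(100.0, 0.0))
    print(plan(toy))
```

Cooperation between vehicles would, per the abstract, be handled separately within each layer rather than in a single stage.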
Abstract:
This study focuses on the analysis of winter (October-November-December-January-February-March; ONDJFM) storm events and their changes due to increased anthropogenic greenhouse gas concentrations over Europe. In order to assess uncertainties that are due to model formulation, 4 regional climate models (RCMs) with 5 high resolution experiments, and 4 global general circulation models (GCMs) are considered. Firstly, cyclone systems as synoptic scale processes in winter are investigated, as they are a principal cause of the occurrence of extreme, damage-causing wind speeds. This is achieved by use of an objective cyclone identification and tracking algorithm applied to GCMs. Secondly, changes in extreme near-surface wind speeds are analysed. Based on percentile thresholds, the studied extreme wind speed indices allow a consistent analysis over Europe that takes systematic deviations of the models into account. Relative changes in both intensity and frequency of extreme winds and their related uncertainties are assessed and related to changing patterns of extreme cyclones. A common feature of all investigated GCMs is a reduced track density over central Europe under climate change conditions, if all systems are considered. If only extreme (i.e. the strongest 5%) cyclones are taken into account, an increasing cyclone activity for western parts of central Europe is apparent; however, the climate change signal reveals a reduced spatial coherency when compared to all systems, which exposes partially contrary results. With respect to extreme wind speeds, significant positive changes in intensity and frequency are obtained over at least 3 and 20% of the European domain under study (35–72°N and 15°W–43°E), respectively. Location and extension of the affected areas (up to 60 and 50% of the domain for intensity and frequency, respectively), as well as levels of changes (up to +15 and +200% for intensity and frequency, respectively) are shown to be highly dependent on the driving GCM, whereas differences between RCMs when driven by the same GCM are relatively small.
Abstract:
Forests are a store of carbon and an ecosystem that continually removes carbon dioxide from the atmosphere. If they are sustainably managed, the carbon store can be maintained at a constant level, while the trees removed and converted to timber products can form an additional long-term carbon store. The total carbon store in the forest and associated 'wood chain' therefore increases over time, given appropriate management. This increasing carbon store can be further enhanced with afforestation. The UK's forest area has increased continually since the early 1900s, although the rate of increase has declined since its peak in the late 1980s, and the picture is similar in the rest of Europe. The increased sustainable use of timber in construction is a key market incentive for afforestation, which can make a significant contribution to reducing carbon emissions. The case study presented in this paper demonstrates the carbon benefits of a Cross Laminated Timber (CLT) solution for a multi-storey residential building in comparison with a more conventional reinforced concrete solution. The embodied carbon of the building up to completion of construction is considered, together with the stored carbon during the life of the building and the impact of different end-of-life scenarios. The results of the study show that the total stored carbon in the CLT structural frame is 1215 tCO2 (30 tCO2 per housing unit). The choice of treatment at end of life has a significant effect on the whole-life embodied carbon of the CLT frame, which ranges from -1017 tCO2e for re-use to +153 tCO2e for incineration without energy recovery. All end-of-life scenarios considered result in lower total CO2e emissions for the CLT frame building compared with the reinforced concrete frame solution.
Abstract:
It has been suggested that the evidence used to support a decision to move our eyes and the confidence we have in that decision are derived from a common source. Alternatively, confidence may be based on further post-decisional processes. In three experiments we examined this. In Experiment 1, participants chose between two targets on the basis of varying levels of evidence (i.e., the direction of motion coherence in a Random-Dot-Kinematogram). They indicated this choice by making a saccade to one of two targets and then indicated their confidence. Saccade trajectory deviation was taken as a measure of the inhibition of the non-selected target. We found that as evidence increased so did confidence and deviations of saccade trajectory away from the non-selected target. However, a correlational analysis suggested they were not related. In Experiment 2 an option to opt-out of the choice was offered on some trials if choice proved too difficult. In this way we isolated trials on which confidence in target selection was high (i.e., when the option to opt-out was available but not taken). Again saccade trajectory deviations were found not to differ in relation to confidence. In Experiment 3 we directly manipulated confidence, such that participants had high or low task confidence. They showed no differences in saccade trajectory deviations. These results support post-decisional accounts of confidence: evidence supporting the decision to move the eyes is reflected in saccade control, but the confidence that we have in that choice is subject to further post-decisional processes.
Abstract:
This study represents the first detailed multi-proxy palaeoenvironmental investigation associated with a Late Iron Age lake-dwelling site in the eastern Baltic. The main objective was to reconstruct the environmental and vegetation dynamics associated with the establishment of the lake-dwelling and land-use during the last 2,000 years. A lacustrine sediment core located adjacent to a Late Iron Age lake-dwelling, medieval castle and Post-medieval manor was sampled in Lake Āraiši. The core was dated using spheroidal fly-ash particles and radiocarbon dating, and analysed in terms of pollen, non-pollen palynomorphs, diatoms, loss-on-ignition, magnetic susceptibility and element geochemistry. Associations between pollen and other proxies were statistically tested. During AD 1–700, the vicinity of Lake Āraiši was covered by forests and human activities were only small-scale, with the first appearance of cereal pollen (Triticum and Secale cereale) after AD 400. The most significant changes in vegetation and environment occurred with the establishment of the lake-dwelling around AD 780, when the immediate surroundings of the lake were cleared for agriculture and nutrient levels within the lake increased. The highest accumulation rates of coprophilous fungi coincide with the occupation of the lake-dwelling from AD 780–1050, indicating that parts of the dwelling functioned as byres for livestock. The conquest of tribal lands during the crusades resulted in changes to the ownership, administration and organisation of the land, but our results indicate that the form and type of agriculture and land-use continued much as it had during the preceding Late Iron Age.
Abstract:
Purpose – Multinationals have always needed an operating model that works – an effective plan for executing their most important activities at the right levels of their organization, whether globally, regionally or locally. The choices involved in these decisions have never been obvious, since international firms have consistently faced trade-offs between tailoring approaches for diverse local markets and leveraging their global scale. This paper seeks a more in-depth understanding of how successful firms manage the global-local trade-off in a multipolar world. Design/methodology/approach – This paper utilizes a case study approach based on in-depth senior executive interviews at several telecommunications companies including Tata Communications. The interviews probed the operating models of the companies we studied, focusing on their approaches to organization structure, management processes, management technologies (including information technology (IT)) and people/talent. Findings – Successful companies balance global-local trade-offs by taking a flexible and tailored approach toward their operating-model decisions. The paper finds that successful companies, including Tata Communications, which is profiled in depth, are breaking up the global-local conundrum into a set of more manageable strategic problems – what the authors call "pressure points" – which they identify by assessing their most important activities and capabilities and determining the global and local challenges associated with them. They then design a different operating-model solution for each pressure point, and repeat this process as new strategic developments emerge. By doing so they not only enhance their agility, but also continually calibrate that crucial balance between global efficiency and local responsiveness. Originality/value – This paper takes a unique approach to operating-model design, finding that an operating model is better viewed as several distinct solutions to specific "pressure points" rather than as a single and inflexible model that addresses all challenges equally. Now more than ever, developing the right operating model is at the top of multinational executives' priorities and an area of increasing concern; the international business arena has changed drastically, requiring thoughtfulness and flexibility instead of standard formulas for operating internationally. Old adages like "think global and act local" no longer provide the universal guidance they once seemed to.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and in some cases floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on a core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
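The final step described above (looking up measured benchmark timings and interpolating between them to predict a given deployment scenario) can be sketched roughly as follows; the benchmark values, scenario parameters and interpolation scheme are illustrative placeholders, not the measurements or model from the paper.

```python
# Hedged sketch of a benchmark-driven lookup/interpolation model: predict the
# runtime of a deployment scenario from measured per-kernel benchmark timings.
# All timings and scenario parameters below are placeholders.
import numpy as np

# Benchmark measurements: local problem size -> time per iteration (seconds),
# taken under a particular node population / core-affinity setting.
compute_bench = {1024: 0.8e-3, 4096: 3.1e-3, 16384: 12.5e-3}   # array updates
halo_bench    = {256: 0.05e-3, 512: 0.09e-3, 1024: 0.17e-3}    # halo exchanges

def interp(bench: dict, size: float) -> float:
    """Piecewise-linear interpolation between benchmarked problem sizes."""
    xs = np.array(sorted(bench))
    ys = np.array([bench[x] for x in xs])
    return float(np.interp(size, xs, ys))

def predict_time(global_size: int, n_tasks: int, halo_width: int = 1) -> float:
    """Combine the two sub-models (compute + halo exchange) for one scenario."""
    local_size = global_size // n_tasks                 # domain decomposition
    halo_size = halo_width * int(local_size ** 0.5)     # rough halo-surface estimate
    return interp(compute_bench, local_size) + interp(halo_bench, halo_size)

# Example: compare two decompositions of the same global problem.
for tasks in (16, 64):
    print(f"{tasks} tasks: predicted {predict_time(1_000_000, tasks) * 1e3:.2f} ms/iteration")
```

In the paper's approach, separate benchmark tables like these would exist per execution scenario (node population, affinity), so the model can rank deployment choices without exhaustive trial and error.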