85 results for Complexity of Relations
Abstract:
This article argues that a native-speaker baseline is a neglected dimension of studies into second language (L2) performance. If we investigate how learners perform language tasks, we should distinguish which performance features are due to their processing an L2 and which are due to their performing a particular task. Having defined what we mean by “native speaker,” we present the background to a research study into the effects of task features on nonnative task performance, designed to include native-speaker data as a baseline for interpreting nonnative-speaker performance. The nonnative results, published in this journal (Tavakoli & Foster, 2008), are recapitulated and then the native-speaker results are presented and discussed in the light of them. The study is guided by the assumption that limited attentional resources impact on L2 performance and explores how narrative design features—namely complexity of storyline and tightness of narrative structure—affect complexity, fluency, accuracy, and lexical diversity in language. The results show that both native and nonnative speakers are prompted by storyline complexity to use more subordinated language, but narrative structure had different effects on native and nonnative fluency. The learners, who were based in either London or Tehran, did not differ in their performance when compared to each other, except in lexical diversity, where the learners in London were close to native-speaker levels. The implications of the results for the applicability of Levelt’s model of speaking to an L2 are discussed, as is the potential for further L2 research using native speakers as a baseline.
Abstract:
The overarching aim of the research reported here was to investigate the effects of task structure and storyline complexity of oral narrative tasks on second language task performance. Participants were 60 Iranian learners of English who performed six narrative tasks of varying degrees of structure and storyline complexity in an assessment setting. A number of detailed analytic measures were employed to examine whether there were any differences in the participants’ performances elicited by the different tasks in terms of their accuracy, fluency, syntactic complexity and lexical diversity. Results of the data analysis showed that performance in the more structured tasks was more accurate and to a great extent more fluent than that in the less structured tasks. The results further revealed that syntactic complexity of L2 performance was related to storyline complexity, i.e., greater syntactic complexity was associated with narratives that had both foreground and background storylines. These findings strongly suggest that there is some systematic variance in the participants’ performance triggered by the different aspects of task design.
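Both of the studies above quantify performance with measures such as lexical diversity. As a minimal illustration of how one such measure can be computed, the sketch below implements a moving-average type-token ratio (MATTR) over a tokenized transcript; the window size, the naive whitespace tokenizer, and the sample sentence are assumptions for illustration, not the instruments used in these studies.

```python
# Minimal sketch of one lexical diversity measure: moving-average
# type-token ratio (MATTR). Window size and tokenizer are illustrative
# choices, not the measures used in the studies above.

def mattr(tokens, window=50):
    """Mean type-token ratio over a sliding window of fixed size."""
    if not tokens:
        raise ValueError("empty transcript")
    if len(tokens) < window:
        return len(set(tokens)) / len(tokens)  # fall back to plain TTR
    ratios = [
        len(set(tokens[i:i + window])) / window
        for i in range(len(tokens) - window + 1)
    ]
    return sum(ratios) / len(ratios)

transcript = "the dog saw the cat and then the dog ran away".split()
print(f"MATTR = {mattr(transcript, window=5):.3f}")
```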
Abstract:
Control and optimization of flavor is the ultimate challenge for the food and flavor industry. The major route to flavor formation during thermal processing is the Maillard reaction, which is a complex cascade of interdependent reactions initiated by the reaction between a reducing sugar and an amino compound. The complexity of the reaction means that researchers turn to kinetic modeling in order to understand the control points of the reaction and to manipulate the flavor profile. Studies of the kinetics of flavor formation have developed over the past 30 years from single-response empirical models of binary aqueous systems to sophisticated multi-response models in food matrices, based on the underlying chemistry, with the power to predict the formation of some key aroma compounds. This paper discusses in detail the development of kinetic models of thermal generation of flavor and looks at the challenges involved in predicting flavor.
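To make the modelling progression concrete, here is a minimal sketch of the kind of single-response, first-order kinetic model the review identifies as the starting point of this literature: a precursor A converting to an aroma compound P with an Arrhenius rate constant. The rate parameters, temperatures, and the A → P scheme are assumptions for illustration, not fitted values from any study.

```python
# Single-response first-order kinetics (A -> P) with an Arrhenius
# rate constant. All numerical values are illustrative assumptions.
import numpy as np

R_GAS = 8.314     # gas constant, J mol^-1 K^-1
A_PRE = 1.0e9     # pre-exponential factor, s^-1 (assumed)
EA = 1.2e5        # activation energy, J mol^-1 (assumed)

def rate_constant(temp_k):
    """Arrhenius rate constant at absolute temperature temp_k (K)."""
    return A_PRE * np.exp(-EA / (R_GAS * temp_k))

def product_yield(t_s, temp_k, a0=1.0):
    """Closed-form first-order yield of P after t_s seconds at temp_k."""
    return a0 * (1.0 - np.exp(-rate_constant(temp_k) * t_s))

# Ten minutes of heating at two processing temperatures shows the
# strong temperature dependence that kinetic models try to capture.
for temp in (373.0, 413.0):   # 100 degC vs 140 degC
    print(f"T = {temp:.0f} K: 10 min yield = {product_yield(600.0, temp):.2e}")
```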
Abstract:
Purpose: Increasing costs of health care, fuelled by demand for high-quality, cost-effective healthcare, have driven hospitals to streamline their patient care delivery systems. One such systematic approach is the adoption of Clinical Pathways (CP) as a tool to increase the quality of healthcare delivery. However, most organizations still rely on paper-based pathway guidelines or specifications, which have limitations in process management and as a result can influence patient safety outcomes. In this paper, we present a method for generating clinical pathways based on organizational semiotics by capturing knowledge from the syntactic, semantic and pragmatic levels through to the social level. Design/methodology/approach: The proposed modeling approach to the generation of CPs adopts organizational semiotics and enables the generation of a semantically rich representation of CP knowledge. The Semantic Analysis Method (SAM) is applied to explicitly represent the semantics of the concepts, their relationships and patterns of behavior in terms of an ontology chart. The Norm Analysis Method (NAM) is adopted to identify and formally specify patterns of behavior and rules that govern the actions identified on the ontology chart. Information collected during semantic and norm analysis is integrated to guide the generation of CPs using best practice represented in BPMN, thus enabling the automation of CPs. Findings: This research confirms the necessity of taking social aspects into consideration when designing information systems and automating CPs. The complexity of healthcare processes can be best tackled by analyzing stakeholders, which we treat as social agents, their goals and patterns of action within the agent network. Originality/value: The current modeling methods describe CPs from a structural aspect comprising activities, properties and interrelationships. However, these methods lack a mechanism to describe possible patterns of human behavior and the conditions under which the behavior will occur. To overcome this weakness, a semiotic approach to the generation of clinical pathways is introduced. The CP generated from SAM together with norms will enrich the knowledge representation of the domain through ontology modeling, which allows the recognition of human responsibilities and obligations and, more importantly, the ultimate power of decision making in exceptional circumstances.
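As a hypothetical illustration of the sort of rule norm analysis yields, the sketch below encodes one behavioural norm in the canonical organizational-semiotics form ("whenever <condition>, <agent> is <obliged/permitted/prohibited> to <action>"). The clinical rule, field names, and evaluation logic are all invented for illustration; they are not taken from the paper.

```python
# Hypothetical representation of a behavioural norm from Norm Analysis.
# The rule content and context keys are invented for illustration.
from dataclasses import dataclass

@dataclass
class Norm:
    condition: str   # whenever <condition> ...
    agent: str       # ... <agent> ...
    deontic: str     # ... is obliged/permitted/prohibited ...
    action: str      # ... to <action>

    def applies(self, context: dict) -> bool:
        """True when the triggering condition holds in this context."""
        return bool(context.get(self.condition, False))

norm = Norm(condition="patient_admitted_with_chest_pain",
            agent="triage nurse",
            deontic="obliged",
            action="record an ECG within 10 minutes")

context = {"patient_admitted_with_chest_pain": True}
if norm.applies(context):
    print(f"The {norm.agent} is {norm.deontic} to {norm.action}.")
```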
Abstract:
Mean field models (MFMs) of cortical tissue incorporate salient, average features of neural masses in order to model activity at the population level, thereby linking microscopic physiology to macroscopic observations, e.g., with the electroencephalogram (EEG). One of the common aspects of MFM descriptions is the presence of a high-dimensional parameter space capturing neurobiological attributes deemed relevant to the brain dynamics of interest. We study the physiological parameter space of a MFM of electrocortical activity and discover robust correlations between physiological attributes of the model cortex and its dynamical features. These correlations are revealed by the study of bifurcation plots, which show that the model responses to changes in inhibition belong to two archetypal categories or “families”. After investigating and characterizing them in depth, we discuss their essential differences in terms of four important aspects: power responses with respect to the modeled action of anesthetics, reaction to exogenous stimuli such as thalamic input, and distributions of model parameters and oscillatory repertoires when inhibition is enhanced. Furthermore, while the complexity of sustained periodic orbits differs significantly between families, we are able to show how metamorphoses between the families can be brought about by exogenous stimuli. We here unveil links between measurable physiological attributes of the brain and dynamical patterns that are not accessible by linear methods. They instead emerge when the nonlinear structure of parameter space is partitioned according to bifurcation responses. We call this general method “metabifurcation analysis”. The partitioning cannot be achieved by the investigation of only a small number of parameter sets and is instead the result of an automated bifurcation analysis of a representative sample of 73,454 physiologically admissible parameter sets. Our approach generalizes straightforwardly and is well suited to probing the dynamics of other models with large and complex parameter spaces.
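The workflow behind "metabifurcation analysis" — sample many physiologically admissible parameter sets, sweep an inhibition parameter, and partition the sets by their bifurcation responses — can be sketched schematically. The toy response function, parameter ranges, and two-cluster k-means step below are stand-ins invented for illustration; they are not the mean field model or the classification procedure of the paper.

```python
# Schematic stand-in for a classify-by-bifurcation-response workflow:
# sweep an inhibition-like parameter for many random parameter sets,
# then partition the response curves into two "families". The toy
# dynamics below are NOT the paper's mean field model.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_sets = 500
sweep = np.linspace(0.0, 2.0, 40)          # inhibition sweep

def toy_response(params, g):
    a, b, c = params                       # assumed toy parametrization
    return a * np.tanh(b * (g - c))

params = rng.uniform([0.5, 1.0, 0.2], [2.0, 6.0, 1.8], size=(n_sets, 3))
curves = np.array([toy_response(p, sweep) for p in params])

# Two archetypal families, mirroring the two categories the paper reports.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(curves)
for fam in (0, 1):
    members = labels == fam
    print(f"family {fam}: {members.sum()} sets, "
          f"mean threshold c = {params[members, 2].mean():.2f}")
```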
Abstract:
The bewildering complexity of cortical microcircuits at the single-cell level gives rise to surprisingly robust emergent activity patterns at the level of laminar and columnar local field potentials (LFPs) in response to targeted local stimuli. Here we report the results of our multivariate data-analytic approach based on simultaneous multi-site recordings using micro-electrode-array chips for investigation of the microcircuitry of rat somatosensory (barrel) cortex. We find high repeatability of stimulus-induced responses, and typical spatial distributions of LFP responses to stimuli in supragranular, granular, and infragranular layers, where the last form a particularly distinct class. Population spikes appear to travel at about 33 cm/s from granular to infragranular layers. Responses within barrel-related columns have different profiles from those in neighbouring columns, whether to the left or to the right. Variations between slices occur, but can be minimized by strictly obeying controlled experimental protocols. Cluster analysis on normalized recordings indicates specific spatial distributions of time series reflecting the location of sources and sinks independent of the stimulus layer. Although the precise correspondences between single-cell activity and LFPs are still far from clear, a sophisticated neuroinformatics approach in combination with multi-site LFP recordings in the standardized slice preparation is suitable for comparing normal conditions to genetically or pharmacologically altered situations based on real cortical microcircuitry.
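The reported propagation speed is simply electrode separation over peak-latency difference; the short sketch below makes the unit bookkeeping explicit. The electrode spacing and latencies are invented numbers chosen so the example reproduces the ~33 cm/s figure from the abstract.

```python
# speed = electrode separation / peak-latency difference.
# Distances and latencies below are invented for illustration; only
# the ~33 cm/s figure comes from the abstract.

def propagation_speed_cm_s(separation_um, t_granular_ms, t_infragranular_ms):
    """Propagation speed in cm/s from separation (um) and latencies (ms)."""
    dt_s = (t_infragranular_ms - t_granular_ms) * 1e-3   # ms -> s
    return (separation_um * 1e-4) / dt_s                 # um -> cm

# Electrodes 330 um apart with peaks 1.0 ms apart give 33 cm/s.
print(f"{propagation_speed_cm_s(330.0, 2.0, 3.0):.1f} cm/s")
```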
Abstract:
The great majority of plant species in the tropics require animals to achieve pollination, but the exact role of floral signals in attraction of animal pollinators is often debated. Many plants provide a floral reward to attract a guild of pollinators, and it has been proposed that floral signals of non-rewarding species may converge on those of rewarding species to exploit the relationship of the latter with their pollinators. In the orchid family (Orchidaceae), pollination is almost universally animal-mediated, but a third of species provide no floral reward, which suggests that deceptive pollination mechanisms are prevalent. Here, we examine floral colour and shape convergence in Neotropical plant communities, focusing on certain food-deceptive Oncidiinae orchids (e.g. Trichocentrum ascendens and Oncidium nebulosum) and rewarding species of Malpighiaceae. We show that the species from these two distantly related families are often more similar in floral colour and shape than expected by chance and propose that a system of multifarious floral mimicry—a form of Batesian mimicry that involves multiple models and is more complex than a simple one model–one mimic system—operates in these orchids. The same mimetic pollination system has evolved at least 14 times within the species-rich Oncidiinae throughout the Neotropics. These results help explain the extraordinary diversification of Neotropical orchids and highlight the complexity of plant–animal interactions.
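A claim of the form "more similar in floral colour and shape than expected by chance" is typically backed by a null-model test. The sketch below shows one generic version, a permutation test on pairwise trait distances; the simulated trait vectors, the Euclidean distance, and the random re-pairing null are assumptions for illustration and are simpler than the analyses in the paper.

```python
# Generic permutation test for "more similar than expected by chance":
# compare the observed mean mimic-model trait distance with distances
# under random re-pairing. Trait vectors here are simulated.
import numpy as np

rng = np.random.default_rng(2)
orchids = rng.normal(size=(14, 3))                        # e.g. colour coords
models = orchids + rng.normal(scale=0.3, size=(14, 3))    # mimics near models

observed = np.mean(np.linalg.norm(orchids - models, axis=1))
null = np.array([
    np.mean(np.linalg.norm(orchids - rng.permutation(models), axis=1))
    for _ in range(10_000)
])
p_value = np.mean(null <= observed)   # one-sided: smaller = more similar
print(f"observed distance = {observed:.3f}, permutation p = {p_value:.4f}")
```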
Abstract:
Approaches to natural resource management emphasise the importance of involving local people and institutions in order to build capacity, limit costs, and achieve environmental sustainability. Governments worldwide, often encouraged by international donors, have formulated devolution policies and legal instruments that provide an enabling environment for devolved natural resource management. However, implementation of these policies reveals serious challenges. This article explores the effects of limited involvement of local people and institutions in policy development and implementation. An in-depth study of the Forest Policy of Malawi and Village Forest Areas in the Lilongwe district provides an example of externally driven policy development which seeks to promote local management of natural resources. The article argues that policy which has weak ownership by national government and does not adequately consider the complexity of local institutions, together with the effects of previous initiatives on them, can create a cumulative legacy through which destructive resource use practices and social conflict may be reinforced. In short, poorly developed and implemented community-based natural resource management policies can do considerably more harm than good. Approaches are needed that enable the policy development process to embed an in-depth understanding of local institutions whilst incorporating flexibility to account for their location-specific nature. This demands further research on policy design to enable rigorous identification of positive and negative institutions and ex-ante exploration of the likely effects of different policy interventions.
Abstract:
Aim: To examine the causes of prescribing and monitoring errors in English general practices and provide recommendations for how they may be overcome. Design: Qualitative interview and focus group study with purposive sampling and thematic analysis informed by Reason’s accident causation model. Participants: General practice staff participated in a combination of semi-structured interviews (n=34) and six focus groups (n=46). Setting: Fifteen general practices across three primary care trusts in England. Results: We identified seven categories of high-level error-producing conditions: the prescriber, the patient, the team, the task, the working environment, the computer system, and the primary-secondary care interface. Each of these was further broken down to reveal various error-producing conditions. The prescriber’s therapeutic training, drug knowledge and experience, knowledge of the patient, perception of risk, and their physical and emotional health were all identified as possible causes. The patient’s characteristics and the complexity of the individual clinical case were also found to have contributed to prescribing errors. The importance of feeling comfortable within the practice team was highlighted, as well as the safety implications of general practitioners (GPs) signing prescriptions generated by nurses when they had not seen the patient themselves. The working environment with its high workload, time pressures, and interruptions, and computer-related issues associated with mis-selecting drugs from electronic pick-lists and overriding alerts, were all highlighted as possible causes of prescribing errors, and these were often interconnected. Conclusion: This study has highlighted the complex underlying causes of prescribing and monitoring errors in general practices, several of which are amenable to intervention.
Abstract:
For an increasing number of applications, mesoscale modelling systems now aim to better represent urban areas. The complexity of processes resolved by urban parametrization schemes varies with the application. The concept of fitness-for-purpose is therefore critical for both the choice of parametrizations and the way in which the scheme should be evaluated. A systematic and objective model response analysis procedure (the Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm) is used to assess the fitness of the single-layer urban canopy parametrization implemented in the Weather Research and Forecasting (WRF) model. The scheme is evaluated regarding its ability to simulate observed surface energy fluxes and its sensitivity to input parameters. Recent amendments are described, focussing on features which improve its applicability to numerical weather prediction, such as a reduced and physically more meaningful list of input parameters. The study shows a high sensitivity of the scheme to parameters characterizing roof properties, in contrast to a low response to road-related ones. Problems in partitioning of energy between turbulent sensible and latent heat fluxes are also emphasized. Some initial guidelines to prioritize efforts to obtain urban land-cover class characteristics in WRF are provided.
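MOSCEM itself is a multiobjective evolutionary calibration algorithm and is not reproduced here, but the headline sensitivity result can be illustrated with a far simpler one-at-a-time perturbation sketch. The toy flux function and its parameter weights are invented (the roof terms dominate by construction, echoing the reported result), so this shows the general form of a sensitivity check, not the paper's procedure.

```python
# One-at-a-time sensitivity sketch (NOT the MOSCEM algorithm): perturb
# each parameter of a toy sensible-heat-flux model by +10% and compare
# responses. The function and weights are invented; roof parameters
# dominate here by construction.

def toy_sensible_flux(p):
    return (120.0 * p["roof_albedo"] + 80.0 * p["roof_thickness"]
            + 5.0 * p["road_albedo"] + 3.0 * p["road_thickness"])

base = {"roof_albedo": 0.20, "roof_thickness": 0.10,
        "road_albedo": 0.10, "road_thickness": 0.30}

for name in base:
    perturbed = dict(base, **{name: base[name] * 1.10})
    dflux = toy_sensible_flux(perturbed) - toy_sensible_flux(base)
    print(f"{name:15s} +10% -> dQ_H = {dflux:+.3f} W m^-2")
```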
Abstract:
This article examines the marginal position of artisanal miners in sub-Saharan Africa, and considers how they are incorporated into mineral sector change in the context of institutional and legal integration. Taking the case of diamond and gold mining in Tanzania, the concept of social exclusion is used to explore the consequences of marginalization on people's access to mineral resources and ability to make a living from artisanal mining. Because existing inequalities and forms of discrimination are ignored by the Tanzanian state, the institutionalization of mineral titles conceals social and power relations that perpetuate highly unequal access to resources. The article highlights the complexity of these processes, and shows that while legal integration can benefit certain wealthier categories of people, who fit into the model of an 'entrepreneurial small-scale miner', for others adverse incorporation contributes to socio-economic dependence, exploitation and insecurity. For the issue of marginality to be addressed within integration processes, the existence of local forms of organization, institutions and relationships, which underpin inequalities and discrimination, needs to be recognized.
Abstract:
Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables which define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
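The shape of such a pipeline — log-transform, per-array normalisation, then multidimensional scaling to look for structure among samples — can be sketched as below. The paper implements its pipeline as an R function; this Python version, with a random matrix standing in for Agilent signal data, median-centring as the normalisation step, and scikit-learn's MDS, is an illustrative assumption throughout.

```python
# Illustrative pipeline shape: log2-transform, per-array median
# centring, then 2-D multidimensional scaling. The random matrix
# stands in for real Agilent signal data; the paper's own pipeline
# is an R function and differs in detail.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
signal = rng.lognormal(mean=5.0, sigma=1.0, size=(12, 2000))  # arrays x genes

log2_signal = np.log2(signal)
# Median-centre each array so arrays are comparable (one simple
# normalisation choice among several possibilities).
normalised = log2_signal - np.median(log2_signal, axis=1, keepdims=True)

# Embed inter-array distances in 2-D to look for sample structure.
coords = MDS(n_components=2, random_state=0).fit_transform(normalised)
print(coords.shape)   # (12, 2): one point per array
```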
Abstract:
Black carbon aerosol plays a unique and important role in Earth’s climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed-phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr⁻¹ in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W m⁻² with 90% uncertainty bounds of (+0.08, +1.27) W m⁻². Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W m⁻². Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W m⁻² with 90% uncertainty bounds of +0.17 to +2.1 W m⁻². Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W m⁻², is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (−0.50 to +1.08) W m⁻² during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (−0.06 W m⁻² with 90% uncertainty bounds of −1.45 to +1.29 W m⁻²). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon.
In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility, play important roles. The major sources of black carbon are presently at different stages with regard to the feasibility of near-term mitigation. This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.
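For orientation, the central forcing estimates quoted above can be collected in one place. The snippet below simply tabulates the best estimates and 90% bounds as given in the abstract; nothing is derived or re-computed, since the assessment's uncertainty bounds do not combine by simple arithmetic.

```python
# Central estimates and 90% uncertainty bounds quoted in the abstract,
# in W m^-2. Values are reproduced as given, not derived.
forcing_terms = {
    "direct, industrial era (after ~3x absorption scaling)": (0.71, 0.08, 1.27),
    "direct, all BC sources (no preindustrial subtraction)": (0.88, 0.17, 1.48),
    "all mechanisms incl. clouds and cryosphere":            (1.10, 0.17, 2.10),
    "energy-related sources, net of co-emissions":           (0.22, -0.50, 1.08),
    "all BC-rich sources incl. open burning, net":           (-0.06, -1.45, 1.29),
}
for term, (best, low, high) in forcing_terms.items():
    print(f"{term:55s} {best:+.2f} ({low:+.2f} to {high:+.2f}) W m^-2")
```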
Abstract:
Construction professional service (CPS) firms sell expertise and provide innovative solutions for projects founded on their knowledge, experience, and technical competences. Large CPS firms seeking to grow will often seek new opportunities in their domestic market and overseas by organic or inorganic growth through mergers, alliances, and acquisitions. Growth can also come from increasing market penetration through vertical, horizontal, and lateral diversification. Such growth, hopefully, leads to economies of scope and scale in the long term, but it can also lead to diseconomies, when the added cost of integration and the increased complexity of diversification no longer create tangible and intangible benefits. The aim of this research is to investigate the key influences impacting the growth in scope and scale of large CPS firms. Qualitative data from interviews were underpinned by secondary data from CPS firms’ annual reports and analysts’ findings. The findings showed five key influences on the scope and scale of a CPS firm: the importance of growth as a driver; the influence of the ownership of the firm on the decision for growth in scope and scale; the optimization of resources and capabilities; the need to serve changing clients’ needs; and the importance of localization. The research provides valuable insights into the growth strategies of international CPS firms. A major finding of the research is the influence of ownership on CPS firms’ growth strategies, which has not been highlighted in previous research.
Abstract:
The present study aims to contribute to an understanding of the complexity of lobbying activities within the accounting standard-setting process in the UK. The paper reports detailed content analysis of submission letters to four related exposure drafts. These preceded two accounting standards that set out the concept of control used to determine the scope of consolidation in the UK, except for reporting under international standards. Regulation on the concept of control provides rich patterns of lobbying behaviour due to its controversial nature and its significance to financial reporting. Our examination is conducted by dividing lobbyists into two categories, corporate and non-corporate, which are hypothesised (and demonstrated) to lobby differently. In order to test the significance of these differences we apply ANOVA techniques and univariate regression analysis. Corporate respondents are found to devote more attention to issues of specific applicability of the concept of control, whereas non-corporate respondents tend to devote more attention to issues of general applicability of this concept. A strong association between the issues raised by corporate respondents and their line of business is revealed. Both categories of lobbyists are found to advance conceptually-based arguments more often than economic consequences-based or combined arguments. However, when economic consequences-based arguments are used, they come exclusively from the corporate category of respondents.
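The comparison design — counts of issue types per submission letter, tested across corporate and non-corporate respondents with ANOVA — can be sketched as follows. The data frame below is fabricated purely to show the mechanics; only the two-group design and the specific/general distinction mirror the paper.

```python
# Hedged sketch of the two-group comparison: fabricated issue counts
# per submission letter, tested with one-way ANOVA per issue type.
import pandas as pd
from scipy import stats

letters = pd.DataFrame({
    "group":    ["corporate"] * 5 + ["non_corporate"] * 5,
    "specific": [7, 5, 6, 8, 7, 2, 3, 1, 2, 3],   # counts are made up
    "general":  [2, 1, 3, 2, 2, 6, 5, 7, 6, 5],
})

for issue in ("specific", "general"):
    corp = letters.loc[letters.group == "corporate", issue]
    nonc = letters.loc[letters.group == "non_corporate", issue]
    f_stat, p_val = stats.f_oneway(corp, nonc)
    print(f"{issue}: F = {f_stat:.2f}, p = {p_val:.4f}")
```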