995 results for Response complexity
Abstract:
Understanding the influence of pore space characteristics on the hydraulic conductivity and spectral induced polarization (SIP) response is critical for establishing relationships between the electrical and hydrological properties of surficial unconsolidated sedimentary deposits, which host the bulk of the world's readily accessible groundwater resources. Here, we present the results of laboratory SIP measurements on industrial-grade, saturated quartz samples with granulometric characteristics ranging from fine sand to fine gravel, which can be regarded as proxies for widespread alluvial deposits. We altered the pore space characteristics by changing (i) the grain size spectra, (ii) the degree of compaction, and (iii) the level of sorting. We then examined how these changes affect the SIP response, the hydraulic conductivity, and the specific surface area of the considered samples. In general, the results indicate a clear connection between the SIP response and the granulometric as well as pore space characteristics. In particular, we observe a systematic correlation between the hydraulic conductivity and the relaxation time of the Cole-Cole model describing the observed SIP effect for the entire range of considered grain sizes. The results do, however, also indicate that the detailed nature of these relations depends strongly on variations in the pore space characteristics, such as the degree of compaction. The results of this study underline the complexity of the origin of the SIP signal as well as the difficulty of relating it to a single structural factor of a studied sample, and hence raise some fundamental questions with regard to the practical use of SIP measurements as site- and/or sample-independent predictors of the hydraulic conductivity.
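The Cole-Cole model referred to above is commonly fitted to SIP data in its Pelton complex-resistivity form, rho(w) = rho0 * [1 - m(1 - 1/(1 + (i*w*tau)^c))], where tau is the relaxation time correlated here with hydraulic conductivity. A minimal numerical sketch with hypothetical parameter values (not values from this study):

```python
import numpy as np

def cole_cole_resistivity(omega, rho0, m, tau, c):
    """Complex resistivity of the Pelton form of the Cole-Cole model.

    omega : angular frequency (rad/s)
    rho0  : DC resistivity (ohm*m)
    m     : chargeability (0..1)
    tau   : relaxation time (s)
    c     : frequency exponent (0..1)
    """
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

# Example spectrum: the negative phase peak sits near omega ~ 1/tau
omega = np.logspace(-3, 4, 200)
rho = cole_cole_resistivity(omega, rho0=100.0, m=0.1, tau=0.01, c=0.5)
phase_mrad = 1000 * np.angle(rho)  # negative phase indicates polarization
```

At low frequencies the resistivity magnitude approaches rho0 and at high frequencies rho0*(1 - m); the position of the phase peak shifts with tau, which is what makes the relaxation time a candidate proxy for hydraulic properties.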
Abstract:
The functional method is a new test theory using a new scoring method that assumes complexity in test structure, and thus takes into account every correlation between factors and items. The main specificity of the functional method is to model test scores by multiple regression instead of estimating them with simplistic sums of points. To do so, the functional method requires the creation of a hyperspherical measurement space, in which item responses are expressed by their correlations with orthogonal factors. This method has three main qualities. First, measures are expressed in the absolute metric of correlations; therefore, items, scales and persons are expressed in the same measurement space using the same single metric. Second, factors are systematically orthogonal and without errors, which is optimal for predicting other outcomes. Such predictions can be performed to estimate how one would answer other tests, or even to model one's response strategy if it were perfectly coherent. Third, the functional method provides measures of individuals' response validity (i.e., control indices). Herein, we propose a standard procedure to identify whether test results are interpretable and to exclude, based on the control indices, invalid results caused by various response biases.
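The core idea (scores obtained by regressing item responses on orthogonal factors rather than summing points) can be sketched with simulated data; the loadings and responses below are invented for illustration and do not reproduce the functional method's actual construction of the hyperspherical measurement space:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_factors = 12, 2

# Hypothetical item-factor loadings: each item response is expressed
# through its relation to a set of orthogonal factors.
loadings = rng.normal(size=(n_items, n_factors))

# Simulated standardized responses of one person
true_theta = np.array([1.0, -0.5])
responses = loadings @ true_theta + 0.05 * rng.normal(size=n_items)

# Sum scoring collapses everything into a single number ...
sum_score = responses.sum()

# ... whereas regression scoring recovers a position on each factor.
theta_hat, *_ = np.linalg.lstsq(loadings, responses, rcond=None)
```

The regression estimate `theta_hat` places the person on each orthogonal factor separately, which is the contrast with sum scoring that the abstract emphasizes.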
Abstract:
Filtration is a widely used unit operation in chemical engineering. The huge variation in the properties of materials to be filtered makes the study of filtration a challenging task. One of the objectives of this thesis was to show that conventional filtration theories are difficult to use when the system to be modelled contains all of the stages and features that are present in a complete solid/liquid separation process. Furthermore, most of the filtration theories require experimental work to be performed in order to obtain critical parameters required by the theoretical models. Creating a good overall understanding of how the variables affect the final product in filtration is somewhat impossible on a purely theoretical basis. The complexity of solid/liquid separation processes requires experimental work, and when tests are needed, it is advisable to use experimental design techniques so that the goals can be achieved. The statistical design of experiments provides the necessary tools for recognising the effects of variables. It also helps to perform experimental work more economically. Design of experiments is a prerequisite for creating empirical models that can describe how the measured response is related to changes in the values of the variables. A software package was developed that provides a filtration practitioner with experimental designs and calculates the parameters for linear regression models, along with the graphical representation of the responses. The developed software consists of two modules, LTDoE and LTRead. The LTDoE module is used to create experimental designs for different filter types. The filter types considered in the software are automatic vertical pressure filter, double-sided vertical pressure filter, horizontal membrane filter press, vacuum belt filter and ceramic capillary action disc filter.
It is also possible to create experimental designs for those cases where the variables are totally user-defined, say for a customized filtration cycle or a different piece of equipment. The LTRead module is used to read the experimental data gathered from the experiments, to analyse the data and to create models for each of the measured responses. Introducing the structure of the software in more detail and showing some of the practical applications is the main part of this thesis. This approach to the study of cake filtration processes, as presented in this thesis, has been shown to have good practical value when making filtration tests.
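The workflow the two modules support (generate a design, then fit a linear regression model to the measured responses) can be sketched as follows; the factor names and response values are hypothetical and the code is not part of the thesis software:

```python
from itertools import product

import numpy as np

# Two-level full factorial design for three hypothetical filtration
# variables, in coded units (-1 = low level, +1 = high level).
factors = ["pressure", "slurry_conc", "filtration_time"]
design = np.array(list(product([-1, 1], repeat=3)), dtype=float)  # 8 runs

# Hypothetical measured response for each run (e.g. cake moisture, %)
response = np.array([12.1, 10.3, 11.8, 9.9, 11.2, 9.1, 10.8, 8.7])

# Fit the empirical model y = b0 + b1*x1 + b2*x2 + b3*x3 by least squares;
# because the design is orthogonal, each coefficient is a main effect.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
```

With a coded two-level design the intercept is simply the grand mean of the responses, and the sign of each coefficient shows the direction of that variable's effect, which is the kind of screening information a design-of-experiments tool provides.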
Abstract:
We monitored physical and chemical parameters, benthic macroinvertebrates, chlorophyll a, primary producers and organic matter over one year (2001-2002) to examine the effects of a point source on taxonomic composition, community structure, functional organization, habitat use and stoichiometry in the river La Tordera (Catalonia). Downstream of the point source, nutrient concentrations, discharge and conductivity were higher than in the upstream reach, while dissolved oxygen was lower. Macroinvertebrate density was higher in the downstream reach, but biomass was similar in the two reaches. Taxonomic richness in the upstream reach was 20% higher than in the downstream reach. Ordination analyses clearly separated the two reaches along the first axis, while both reaches showed a similar temporal pattern along the second axis. The similarity between the two reaches in taxonomic composition, density and biomass after the floods of April and May 2002 indicates that flow disturbances can act as a reset mechanism for the benthic community and play an important role in the restoration of fluvial ecosystems. The two reaches had similar biomasses of periphyton, vascular plants, CPOM and FPOM, whereas chlorophyll a, filamentous algae, mosses and SPOM were higher in the downstream reach. The relative density of shredders was lower below the point source, while collectors and filterers were favoured. The relative biomass of shredders was also lower below the point source, but the biomass of collectors and predators increased. Relationships between the density of trophic groups and their resources were rarely significant. The relationship was better explained by macroinvertebrate biomass. The two reaches shared the same relationship for scrapers, collectors and filterers, but not for shredders and predators.
Macroinvertebrate density and biomass were positively correlated with the amount of trophic resources and with habitat complexity, while taxonomic richness was negatively related to hydraulic parameters. The influence of inorganic substrates was less relevant for macroinvertebrate distribution. Ordination analyses showed that the most relevant microhabitat variables were CPOM, chlorophyll a, filamentous algae and velocity. Sand cover was significant only in the upstream reach, and mosses only in the downstream reach. The number of significant correlations between macroinvertebrates and microhabitat variables was higher in the upstream reach than in the downstream reach, mainly because of differences in taxonomic composition. Macroinvertebrate biomass provided information similar to that obtained from density. Periphyton and mosses had similar nutrient contents in the two reaches. The %C and %N of filamentous algae were also similar in the two reaches, but the %P below the point source was twice that of the upstream reach. The stoichiometric ratios of CPOM, FPOM and SPOM were considerably lower below the point source. Elemental contents and ratios were highly variable among macroinvertebrate taxa but did not differ significantly between the two reaches. Dipterans, trichopterans and ephemeropterans showed similar stoichiometry, whereas C and N contents were lower in molluscs and P in coleopterans. Predators had higher C and N contents than the other trophic groups, while P was higher in filterers. Elemental imbalances between consumers and resources were smaller in the downstream reach. In autumn and winter the main nutrient source was BOM, whereas in spring and summer it was periphyton.
Abstract:
Why are humans musical? Why do people in all cultures sing or play instruments? Why do we appear to have specialized neurological apparatus for hearing and interpreting music as distinct from other sounds? And how does our musicality relate to language and to our evolutionary history? Anthropologists and archaeologists have paid little attention to the origin of music and musicality — far less than for either language or ‘art’. While art has been seen as an index of cognitive complexity and language as an essential tool of communication, music has suffered from our perception that it is an epiphenomenal ‘leisure activity’, and archaeologically inaccessible to boot. Nothing could be further from the truth, according to Steven Mithen; music is integral to human social life, he argues, and we can investigate its ancestry with the same rich range of analyses — neurological, physiological, ethnographic, linguistic, ethological and even archaeological — which have been deployed to study language. In The Singing Neanderthals Steven Mithen poses these questions and proposes a bold hypothesis to answer them. Mithen argues that musicality is a fundamental part of being human, that this capacity is of great antiquity, and that a holistic protolanguage of musical emotive expression predates language and was an essential precursor to it. This is an argument with implications which extend far beyond the mere origins of music itself into the very motives of human origins. Any argument of such range is bound to attract discussion and critique; we here present commentaries by archaeologists Clive Gamble and Iain Morley and linguists Alison Wray and Maggie Tallerman, along with Mithen's response to them. Whether right or wrong, Mithen has raised fascinating and important issues. 
And it adds a great deal of charm to the time-honoured, perhaps shopworn image of the Neanderthals shambling ineffectively through the pages of Pleistocene prehistory to imagine them humming, crooning or belting out a cappella harmonies as they went.
Abstract:
With the rapid development in technology over recent years, construction, in common with many areas of industry, has become increasingly complex. It would therefore seem important to develop and extend the understanding of complexity so that industry in general, and in this case the construction industry, can work with greater accuracy and efficiency to provide clients with a better service. This paper aims to generate a definition of complexity and a method for its measurement in order to assess its influence upon the accuracy of the quantity surveying profession in UK new build office construction. Quantitative data came from an analysis of twenty projects of varying size and value, and qualitative data came from interviews with professional quantity surveyors. The findings highlight the difficulty in defining and measuring project complexity. The correlation between accuracy and complexity was not straightforward, being subject to many extraneous variables, particularly the impact of project size. Further research is required to develop a better measure of complexity, in order to improve the response of quantity surveyors, so that an appropriate level of effort can be applied to individual projects, permitting greater accuracy and enabling better resource planning within the profession.
Abstract:
The Atlantic thermohaline circulation (THC) is an important part of the Earth's climate system. Previous research has shown large uncertainties in simulating future changes in this critical system. The simulated THC response to idealized freshwater perturbations and the associated climate changes have been intercompared as an activity of the World Climate Research Program (WCRP) Coupled Model Intercomparison Project/Paleo-Modeling Intercomparison Project (CMIP/PMIP) committees. This intercomparison among models ranging from Earth system models of intermediate complexity (EMICs) to fully coupled atmosphere-ocean general circulation models (AOGCMs) seeks to document and improve understanding of the causes of the wide variations in the modeled THC response. The robustness of particular simulation features has been evaluated across the model results. In response to 0.1-Sv (1 Sv = 10^6 m^3 s^-1) freshwater input in the northern North Atlantic, the multimodel ensemble mean THC weakens by 30% after 100 yr. All models simulate some weakening of the THC, but no model simulates a complete shutdown of the THC. The multimodel ensemble indicates that the surface air temperature could present a complex anomaly pattern with cooling south of Greenland and warming over the Barents and Nordic Seas. The Atlantic ITCZ tends to shift southward. In response to 1.0-Sv freshwater input, the THC switches off rapidly in all model simulations. A large cooling occurs over the North Atlantic. The annual mean Atlantic ITCZ moves into the Southern Hemisphere. Models disagree in terms of the reversibility of the THC after its shutdown. In general, the EMICs and AOGCMs obtain similar THC responses and climate changes, with more pronounced and sharper patterns in the AOGCMs.
Abstract:
Complexity is integral to planning today. Everyone and everything seem to be interconnected, causality appears ambiguous, unintended consequences are ubiquitous, and information overload is a constant challenge. The nature of complexity, the consequences of it for society, and the ways in which one might confront it, understand it and deal with it in order to allow for the possibility of planning, are issues increasingly demanding analytical attention. One theoretical framework that can potentially assist planners in this regard is Luhmann's theory of autopoiesis. This article uses insights from Luhmann's ideas to understand the nature of complexity and its reduction, thereby redefining issues in planning, and explores the ways in which management of these issues might be observed in actual planning practice via a reinterpreted case study of the People's Planning Campaign in Kerala, India. Overall, this reinterpretation leads to a different understanding of the scope of planning and planning practice, telling a story about complexity and systemic response. It allows the reinterpretation of otherwise familiar phenomena, both highlighting the empirical relevance of the theory and providing new and original insight into particular dynamics of the case study. This not only provides a greater understanding of the dynamics of complexity, but also produces advice to help planners implement structures and processes that can cope with complexity in practice.
Abstract:
Chemotaxis is one of the best characterised signalling systems in biology. It is the mechanism by which bacteria move towards optimal environments and is implicated in biofilm formation, pathogenesis and symbiosis. The properties of the bacterial chemosensory response have been described in detail for the single chemosensory pathway of Escherichia coli. We have characterised the properties of the chemosensory response of Rhodobacter sphaeroides, an α-proteobacterium with multiple chemotaxis pathways, under two growth conditions, allowing the effects of protein expression levels and cell architecture to be investigated. Using tethered cell assays we measured the responses of the system to step changes in concentration of the attractant propionate and show that, independently of the growth conditions, R. sphaeroides is chemotactic over at least five orders of magnitude and has a sensing profile following Weber's law. Mathematical modelling also shows that, like E. coli, R. sphaeroides is capable of showing Fold-Change Detection (FCD). Our results indicate that general features of bacterial chemotaxis such as the range and sensitivity of detection, adaptation times, adherence to Weber's law and the presence of FCD may be integral features of chemotaxis systems in general, regardless of network complexity, protein expression levels and cellular architecture across different species.
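The scaling behaviour described here can be illustrated with a minimal dynamical motif that exhibits exact fold-change detection: a slow variable x adapts to the input u, and the output is y = u/x, so rescaling the input leaves the output trajectory unchanged. This is a generic textbook-style sketch, not the authors' model of R. sphaeroides:

```python
import numpy as np

def simulate_fcd(u0, u1, a=1.0, dt=0.01, T=10.0):
    """Simulate a minimal fold-change-detection motif:
        dx/dt = a * (u - x),   output y = u / x.
    The output trajectory depends only on the fold change u1/u0."""
    n = int(T / dt)
    x = u0  # start adapted to the pre-step input level
    y = np.empty(n)
    for i in range(n):
        y[i] = u1 / x
        x += dt * a * (u1 - x)  # forward-Euler adaptation step
    return y

y_small = simulate_fcd(1.0, 2.0)    # step 1 -> 2  (fold change 2)
y_large = simulate_fcd(10.0, 20.0)  # step 10 -> 20 (same fold change)
```

Because the dynamics are invariant under a common rescaling of u and x, the two responses are identical: the initial response equals the fold change and the output adapts back towards 1, in the spirit of Weber's law.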
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated resulting in too much land carbon loss or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one thousand year long, idealized, 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. 
Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account, when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
Abstract:
The inclusion of the direct and indirect radiative effects of aerosols in high-resolution global numerical weather prediction (NWP) models is being increasingly recognised as important for the improved accuracy of short-range weather forecasts. In this study the impacts of increasing the aerosol complexity in the global NWP configuration of the Met Office Unified Model (MetUM) are investigated. A hierarchy of aerosol representations is evaluated, including three-dimensional monthly mean speciated aerosol climatologies, fully prognostic aerosols modelled using the CLASSIC aerosol scheme and, finally, initialised aerosols using assimilated aerosol fields from the GEMS project. The prognostic aerosol schemes are better able to predict the temporal and spatial variation of atmospheric aerosol optical depth, which is particularly important in cases of large sporadic aerosol events such as large dust storms or forest fires. Including the direct effect of aerosols improves model biases in outgoing long-wave radiation over West Africa due to a better representation of dust. However, uncertainties in dust optical properties propagate to its direct effect and the subsequent model response. Inclusion of the indirect aerosol effects improves surface radiation biases at the North Slope of Alaska ARM site due to lower cloud amounts in high-latitude clean-air regions. This leads to improved temperature and height forecasts in this region. Impacts on the global mean model precipitation and large-scale circulation fields were found to be generally small in the short-range forecasts. However, the indirect aerosol effect leads to a strengthening of the low-level monsoon flow over the Arabian Sea and Bay of Bengal and an increase in precipitation over Southeast Asia. Regional impacts on the African Easterly Jet (AEJ) are also presented, with the large dust loading in the aerosol climatology enhancing the heat low over West Africa and weakening the AEJ.
This study highlights the importance of including a more realistic treatment of aerosol–cloud interactions in global NWP models and the potential for improved global environmental prediction systems through the incorporation of more complex aerosol schemes.
Abstract:
Leaves comprise most of the vegetative body of tank bromeliads and are usually subjected to strong longitudinal gradients. For instance, while the leaf base is in contact with the water accumulated in the tank, the more light-exposed middle and upper leaf sections have no direct access to this water reservoir. Therefore, the present study attempted to investigate whether different leaf portions of Guzmania monostachia, a tank-forming C3-CAM bromeliad, play distinct physiological roles in response to water shortage, which is a major abiotic constraint in the epiphytic habitat. Internal and external morphological features, relative water content, pigment composition and the degree of CAM expression were evaluated in basal, middle and apical leaf portions in order to allow the establishment of correlations between the structure and the functional importance of each leaf region. Results indicated that besides marked structural differences, a high level of functional specialization is also present along the leaves of this bromeliad. When the tank water was depleted, the abundant hydrenchyma of basal leaf portions was the main reservoir for maintaining a stable water status in the photosynthetic tissues of the apical region. In contrast, the CAM pathway was intensified specifically in the upper leaf section, which is in agreement with the presence of features more suitable for the occurrence of photosynthesis at this portion. Gas exchange data indicated that internal recycling of respiratory CO2 accounted for virtually all nighttime acid accumulation, characterizing a typical CAM-idling pathway in the drought-exposed plants. Altogether, these data reveal a remarkable physiological complexity along the leaves of G. monostachia, which might be a key adaptation to the intermittent water supply of the epiphytic niche.
Abstract:
This thesis provides three original contributions to the field of Decision Sciences. The first contribution explores the field of heuristics and biases. New variations of the Cognitive Reflection Test (CRT, a test to measure "the ability or disposition to resist reporting the response that first comes to mind") are provided. The original CRT (S. Frederick [2005] Journal of Economic Perspectives, v. 19:4, pp. 24-42) has items in which the response is immediate, and erroneous. It is shown that by merely varying the numerical parameters of the problems, large deviations in response are found. Not only are the final results affected by the proposed variations, but so is processing fluency. It seems that the magnitudes of the numbers serve as a cue to activate System 2 reasoning. The second contribution explores Managerial Algorithmics Theory (M. Moldoveanu [2009] Strategic Management Journal, v. 30, pp. 737-763), an ambitious research program which states that managers display cognitive choices with a "preference towards solving problems of low computational complexity". An empirical test of this hypothesis is conducted, with results showing that this premise is not supported. A number of problems are designed with the intent of testing the predictions of managerial algorithmics against those of cognitive psychology. The results demonstrate (once again) that framing effects profoundly affect choice, and (an original insight) that managers are unable to distinguish between computational complexity problem classes. The third contribution explores a new approach to a computationally complex problem in marketing: the shelf space allocation problem (M-H Yang [2001] European Journal of Operational Research, v. 131, pp. 107-118). A new representation for a genetic algorithm is developed, and computational experiments demonstrate its feasibility as a practical solution method.
These studies lie at the interface of psychology and economics (through bounded rationality and the heuristics and biases programme); of psychology, strategy, and computational complexity; and of heuristics for computationally hard problems in management science.
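A genetic algorithm for a simplified shelf space allocation problem can be sketched as follows; the instance, the representation (one shelf index per product) and the penalty scheme are illustrative assumptions, not the thesis's actual formulation:

```python
import random

random.seed(1)

# Hypothetical toy instance: profit[i][s] is the profit of product i on
# shelf s, and each shelf holds at most `capacity` products.
n_products, n_shelves, capacity = 12, 4, 3
profit = [[random.uniform(1, 10) for _ in range(n_shelves)]
          for _ in range(n_products)]

def fitness(chrom):
    """Total profit, heavily penalizing overfull shelves."""
    load = [0] * n_shelves
    total = 0.0
    for i, s in enumerate(chrom):
        load[s] += 1
        total += profit[i][s]
    penalty = sum(max(0, l - capacity) for l in load)
    return total - 100.0 * penalty

def evolve(pop_size=40, gens=200, mut=0.1):
    """Elitist GA: keep the best half, refill with crossover + mutation."""
    pop = [[random.randrange(n_shelves) for _ in range(n_products)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_products)     # one-point crossover
            child = a[:cut] + b[cut:]
            for j in range(n_products):               # per-gene mutation
                if random.random() < mut:
                    child[j] = random.randrange(n_shelves)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The direct "shelf index per product" encoding keeps crossover and mutation trivial, while the penalty term steers the search towards capacity-feasible assignments.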
Abstract:
We investigated whether or not different degrees of refuge for prey influence the characteristics of the functional response exhibited by the spider Nesticodes rufipes on Musca domestica, comparing the inherent ability of N. rufipes to kill individual houseflies in such environments at two distinct time intervals. To investigate these questions, two artificial habitats were set up in the laboratory. For 168 h of predator-prey interaction, logistic regression analyses revealed a type II functional response, and a significant decrease in prey capture at the highest prey density was observed when habitat complexity was increased. Data from habitat 1 (less complex) presented a greater coefficient of determination than those from habitat 2 (more complex), indicating a higher variation in predation in the latter. For a 24 h period of predator-prey interaction, spiders killed significantly fewer prey in habitat 2 than in habitat 1. Although prey capture did not enable the data to fit properly in the random predator equation in this case, predation data from habitat 2 presented a higher variation than data from habitat 1, corroborating the results from 168 h of interaction. The high variability observed in the data from habitat 2 (the more complex habitat) is an interesting result because it reinforces the importance of refuge in promoting spatial heterogeneity, which can affect the extent of predator-prey interactions.
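A type II functional response is the saturating form captured by Holling's disc equation, Ne = a*N*T / (1 + a*Th*N): prey consumption rises with density but levels off as handling time absorbs a growing share of the total time. A brief sketch with hypothetical parameter values (the declining proportion of prey eaten with density is the pattern the logistic regression test detects):

```python
import numpy as np

def holling_type_ii(N, a, Th, T=1.0):
    """Holling's disc equation for a type II functional response.

    N  : prey density
    a  : attack rate
    Th : handling time per prey item
    T  : total time available
    """
    return a * N * T / (1 + a * Th * N)

densities = np.array([2, 4, 8, 16, 32, 64], dtype=float)
eaten = holling_type_ii(densities, a=0.5, Th=0.1)
proportion = eaten / densities  # declines with density: type II hallmark
```

Here consumption saturates towards T/Th as density grows, so the proportion of prey eaten falls monotonically, distinguishing a type II from a type III (sigmoid) response.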