956 results for environmental modeling


Relevance: 30.00%

Abstract:

Objectives. The purpose of this study was to identify the psychosocial and environmental predictors and the pathways they use to influence calcium intake, physical activity and bone health among adolescent girls. Methods. A secondary data analysis using a cross-sectional and longitudinal study design was implemented to examine the associations of interest. Data from the Incorporating More Physical Activity and Calcium in Teens (IMPACT) study, collected in 2001-2003, were utilized for the analyses. IMPACT was a 1½-year nutrition and physical activity intervention study conducted among 718 middle-school girls in central Texas. Hierarchical regression modeling and Structural Equation Modeling (SEM) were used to determine the psychosocial predictors of calcium intake, physical activity and bone health at baseline. Hierarchical regression was used to determine if psychosocial factors at baseline were significant predictors of calcium intake and physical activity at follow-up. Analyses were adjusted for BMI, lactose intolerance, ethnicity, menarchal status, intervention group and participation in 7th grade PE/athletics. Results. Results of the baseline regression analysis revealed that calcium self-efficacy and milk availability at home were the strongest predictors of calcium intake. Friend engagement in physical activity, physical activity self-efficacy and participation in sports teams were the strongest predictors of physical activity. Finally, physical activity outcome expectations, social support and participation in sports teams were significant predictors of stiffness index at baseline. Results of the baseline SEM path analysis found that outcome expectations and milk availability at home directly influenced calcium intake. Knowledge and calcium self-efficacy indirectly influenced calcium intake with outcome expectations as the mediator.
Physical activity self-efficacy and social support had significant direct and indirect influences on physical activity, with participation in sports teams as the mediator. Participation in sports teams had a direct effect on both physical activity and stiffness index. Results of the regression analysis predicting follow-up from baseline showed that participation in sports teams, self-efficacy, outcome expectations and social support at baseline were significant predictors of physical activity at follow-up. Conclusion. Results of this study reinforce the relevance of addressing both psychosocial and environmental factors, which are critical when developing interventions to improve bone health among adolescent girls.
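The hierarchical (blockwise) regression used in the study can be sketched as follows: predictors are entered in blocks and the gain in R² is examined. The variable names and synthetic data below are illustrative only, not the IMPACT study's actual coding, and plain NumPy least squares stands in for the study's statistical software.

```python
# Sketch of hierarchical (blockwise) regression: predictors are entered
# in blocks and the gain in R^2 is examined. Variable names and data are
# hypothetical, for illustration only.
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with an intercept, via least squares."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 718  # IMPACT baseline sample size
self_efficacy = rng.normal(size=n)   # psychosocial block
milk_at_home = rng.normal(size=n)    # environmental block
calcium_intake = 0.5 * self_efficacy + 0.4 * milk_at_home + rng.normal(size=n)

# Block 1: psychosocial predictor only
r2_block1 = r_squared(self_efficacy[:, None], calcium_intake)
# Block 2: add the environmental predictor
r2_block2 = r_squared(np.column_stack([self_efficacy, milk_at_home]),
                      calcium_intake)
# The increase from r2_block1 to r2_block2 is the variance additionally
# explained by the environmental block.
```
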

Relevance: 30.00%

Abstract:

To understand the validity of d18O proxy records as indicators of past temperature change, a series of experiments was conducted using an atmospheric general circulation model fitted with water isotope tracers (Community Atmosphere Model version 3.0, IsoCAM). A pre-industrial simulation was performed as the control experiment, as well as a simulation with all the boundary conditions set to Last Glacial Maximum (LGM) values. Results from the pre-industrial and LGM simulations were compared to experiments in which the individual boundary conditions (greenhouse gases, ice sheet albedo and topography, sea surface temperature (SST), and orbital parameters) were changed one at a time to assess their individual impact. The experiments were designed to analyze the spatial variations of the oxygen isotopic composition of precipitation (d18Oprecip) in response to individual climate factors. The change in topography (due to the change in land ice cover) played a significant role in reducing the surface temperature and d18Oprecip over North America. Exposed shelf areas and the ice sheet albedo further reduced the Northern Hemisphere surface temperature and d18Oprecip. A global mean cooling of 4.1 °C was simulated with combined LGM boundary conditions compared to the control simulation, in agreement with previous experiments using the fully coupled Community Climate System Model (CCSM3). Large reductions in d18Oprecip over the LGM ice sheets were strongly linked to the temperature decrease over them. The SST and ice sheet topography changes were responsible for most of the changes in the climate, and hence in the d18Oprecip distribution, among the simulations.
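As a reminder of the quantity being simulated, d18O is the per-mil deviation of a sample's 18O/16O ratio from a reference standard. A minimal sketch; the VSMOW ratio is the standard literature value, and the depleted sample in the comment is hypothetical:

```python
R_VSMOW = 2005.2e-6  # 18O/16O ratio of Vienna Standard Mean Ocean Water

def delta18O(r_sample):
    """Oxygen isotopic composition in per mil relative to VSMOW."""
    return (r_sample / R_VSMOW - 1.0) * 1000.0

# Precipitation depleted in 18O (e.g. over the LGM ice sheets) gives
# strongly negative values; the standard itself gives 0 by definition.
```
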

Relevance: 30.00%

Abstract:

A critical problem in radiocarbon dating is the spatial and temporal variability of marine reservoir ages (MRAs). We assessed the MRA evolution during the last deglaciation by numerical modeling, applying a self-consistent iteration scheme in which an existing radiocarbon chronology (derived by Hughen et al., Quat. Sci. Rev., 25, pp. 3216-3227, 2006) was readjusted by transient, 3-D simulations of marine and atmospheric Delta14C. To estimate the uncertainties regarding the ocean ventilation during the last deglaciation, we considered various ocean overturning scenarios which are based on different climatic background states (PD: modern climate, GS: LGM climate conditions). Minimum and maximum MRAs are included in file 'MRAminmax_21-14kaBP.nc'. Three further files include MRAs according to equilibrium simulations of the preindustrial ocean (file 'C14age_preindustrial.nc'; this is an update of our results published in 2005) and of the glacial ocean (files 'C14age_spinupLGM_GS.nc' and 'C14age_spinupLGM_PD.nc').
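The marine reservoir age itself has a compact definition worth keeping in mind when using these files: the apparent 14C age of surface water minus the contemporaneous atmospheric 14C age. A minimal sketch on the conventional (Libby) timescale; the example activities are hypothetical, and this is the generic definition rather than the paper's 3-D simulation code:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; conventional radiocarbon timescale

def c14_age(f14c):
    """Conventional 14C age from the normalized 14C activity F14C."""
    return -LIBBY_MEAN_LIFE * math.log(f14c)

def marine_reservoir_age(f14c_ocean, f14c_atm):
    """MRA: apparent 14C age of surface water minus atmospheric 14C age."""
    return c14_age(f14c_ocean) - c14_age(f14c_atm)

# A surface ocean 5% more 14C-depleted than the atmosphere corresponds
# to an MRA of roughly 400 years.
```
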

Relevance: 30.00%

Abstract:

Fossil shells of planktonic foraminifera serve as the prime source of information on past changes in surface ocean conditions. Because the population size of planktonic foraminifera species changes throughout the year, the signal preserved in fossil shells is biased towards the conditions when species production was at its maximum. The amplitude of the potential seasonal bias is a function of the magnitude of the seasonal cycle in production. Here we use a planktonic foraminifera model coupled to an ecosystem model to investigate to what degree seasonal variations in production of the species Neogloboquadrina pachyderma may affect paleoceanographic reconstructions during Heinrich Stadial 1 (~18-15 cal. ka B.P.) in the North Atlantic Ocean. The model implies that during Heinrich Stadial 1 the maximum seasonal production occurred later in the year compared to the Last Glacial Maximum (~21-19 cal. ka B.P.) and the pre-industrial era north of 30°N. A diagnosis of the model output indicates that this change reflects the sensitivity of the species to the seasonal cycle of sea-ice cover and food supply, which collectively lead to shifts in the modeled maximum production from the Last Glacial Maximum to Heinrich Stadial 1 by up to six months. Assuming equilibrium oxygen isotopic incorporation in the shells of N. pachyderma, the modeled changes in seasonality would result in an underestimation of the actual magnitude of the meltwater isotopic signal recorded by fossil assemblages of N. pachyderma wherever calcification is likely to take place.
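The seasonal bias described above is, at bottom, a production weighting: the fossil signal approximates the production-weighted annual mean rather than the plain annual mean. A toy sketch with hypothetical monthly values:

```python
import numpy as np

def recorded_signal(monthly_signal, monthly_production):
    """Production-weighted annual mean, approximating what the fossil
    assemblage records."""
    return float(np.average(monthly_signal, weights=monthly_production))

# Hypothetical monthly SST (deg C) and a production peak in late summer
sst = np.array([0, 1, 3, 6, 10, 14, 16, 15, 12, 8, 4, 1], dtype=float)
production = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0], dtype=float)

annual_mean = float(sst.mean())              # plain annual mean
recorded = recorded_signal(sst, production)  # warm-biased by the peak
```

A shift in the timing of the production peak (as modeled between the LGM and Heinrich Stadial 1) changes `recorded` even when the climate signal itself is unchanged, which is exactly the bias discussed above.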

Relevance: 30.00%

Abstract:

In arid countries worldwide, social conflicts between irrigation-based human development and the conservation of aquatic ecosystems are widespread and attract many public debates. This research focuses on the analysis of water and agricultural policies aimed at conserving groundwater resources and maintaining rural livelihoods in a basin in Spain's central arid region. Intensive groundwater mining for irrigation has caused overexploitation of the basin's large aquifer, degraded reputed wetlands and given rise to notable social conflicts over the years. With the aim of tackling the multifaceted socio-ecological interactions of complex water systems, the methodology used in this study consists of a novel integration into a common platform of an economic optimization model and the hydrology model WEAP (Water Evaluation And Planning system). This robust tool is used to analyze the spatial and temporal effects of different water and agricultural policies under different climate scenarios. It permits the prediction of climate and policy outcomes across farm types (water stress impacts and adaptation), at the basin level (aquifer recovery), and along the policies' implementation horizon (short and long run). Results show that the region's current quota-based water policies may contribute to reducing water consumption on the farms but will not recover the aquifer and will inflict income losses on the rural communities. This situation would worsen in the event of drought. Economies of scale and technology are in evidence, as larger farms with cropping diversification and those equipped with modern irrigation will adapt better to water stress conditions. However, the long-term sustainability of the aquifer and the maintenance of rural livelihoods will be attained only if additional policy measures are put in place, such as the control of illegal abstractions and the establishment of a water bank.
Within the policy domain, the research contributes to the new sustainable development strategy of the EU by concluding that, in water-scarce regions, effective integration of water and agricultural policies is essential for achieving the water protection objectives of EU policies. Therefore, the design and enforcement of well-balanced, region-specific policies is a major task faced by policy makers seeking successful water management that will ensure nature protection and human development at tolerable social costs. From a methodological perspective, this research initiative contributes to better addressing hydrological questions as well as economic and social issues in complex water and human systems. Its integrated vision provides a valuable illustration to inform water policy and management decisions within contexts of water-related conflicts worldwide.
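The headline result above, that quotas cut consumption but do not recover the aquifer, follows from a simple water balance: storage only recovers if the quota falls below natural recharge. A deliberately crude sketch, with all volumes hypothetical and no claim to represent the coupled WEAP-economic model:

```python
def simulate_aquifer(storage0, recharge, demand, quota, years):
    """Annual aquifer storage update with quota-limited pumping
    (all volumes in the same arbitrary unit, e.g. hm3)."""
    storage = storage0
    for _ in range(years):
        pumped = min(demand, quota)  # farms pump up to the quota
        storage += recharge - pumped
    return storage

# Quota below demand but above recharge: use drops, storage still falls.
declining = simulate_aquifer(1000.0, recharge=50.0, demand=120.0,
                             quota=80.0, years=10)
# Quota below recharge: the aquifer recovers.
recovering = simulate_aquifer(1000.0, recharge=50.0, demand=120.0,
                              quota=40.0, years=10)
```
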

Relevance: 30.00%

Abstract:

One of the key scrutiny issues of the coming energy era will be the environmental impact of fusion facilities managing one kg of tritium. A potential change of committed-dose regulatory limits, together with the implementation of nuclear design principles (As Low As Reasonably Achievable - ALARA -, Defense in Depth - D-i-D -) for fusion facilities, could strongly impact the cost of deployment of coming fusion technology. Accurate modeling of environmental tritium transport forms (HT, HTO) for the assessment of a fusion facility's dosimetric impact in accident scenarios is therefore of major interest. This paper considers different short-term releases of tritium forms (HT and HTO) to the atmosphere from a potential fusion reactor located in the Mediterranean Basin. This work models in detail the dispersion of tritium forms and the dosimetric impact of selected environmental patterns, both inland and at sea, using real topography and forecast meteorological data fields (ECMWF/FLEXPART). We explore specific values of the HTO/HT ratio at different levels and examine the influence of meteorological conditions on HTO behavior over 24 hours. For this purpose we have used a coupled Lagrangian ECMWF/FLEXPART model to follow real-time releases of tritium at 10, 30 and 60 meters, together with hourly observations of wind (and in some cases precipitation), to provide a short-range approximation of tritium cloud behavior. We have assessed inhalation doses, as well as HTO/HT ratios, in a representative set of cases during winter 2010 and spring 2011 for the three air levels.
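For orientation, the inhalation-dose estimate behind such assessments is a product of three factors: time-integrated air concentration, breathing rate, and a dose coefficient. A minimal sketch; the HTO coefficient and breathing rate are standard ICRP reference values, the release figure in the test is a placeholder, and this is not the paper's actual dose model:

```python
DCF_HTO = 1.8e-11         # Sv/Bq, ICRP inhalation dose coefficient for HTO
BREATHING_RATE = 2.57e-4  # m3/s (~22.2 m3/day, ICRP reference adult)

def inhalation_dose(time_integrated_conc):
    """Committed effective dose (Sv) from a time-integrated HTO air
    concentration (Bq*s/m3) at the receptor location."""
    return time_integrated_conc * BREATHING_RATE * DCF_HTO
```
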

Relevance: 30.00%

Abstract:

Modeling is an essential tool for the development of atmospheric emission abatement measures and air quality plans. Most often these plans are related to urban environments with high emission density and population exposure. However, air quality modeling in urban areas is a rather challenging task. As environmental standards become more stringent (e.g. European Directive 2008/50/EC), more reliable and sophisticated modeling tools are needed to simulate measures and plans that may effectively tackle air quality exceedances, common in large urban areas across Europe, particularly for NO2. This also implies that emission inventories must satisfy a number of conditions, such as consistency across the spatial scales involved in the analysis, consistency with the emission inventories used for regulatory purposes, and versatility to match the requirements of different air quality and emission projection models. This study reports the modeling activities carried out in Madrid (Spain), highlighting the atmospheric emission inventory development and preparation as an illustrative example of the combination of models and data needed to develop a consistent air quality plan at the urban level. These activities included a series of source apportionment studies to define contributions from international, national, regional and local sources, in order to understand to what extent local authorities can enforce meaningful abatement measures. Source apportionment studies were also conducted to define contributions from different sectors and to understand the maximum feasible air quality improvement that can be achieved by reducing emissions from those sectors, thus targeting emission reduction policies at the most relevant activities. Finally, an emission scenario reflecting the effect of such policies was developed and the associated air quality was modeled.
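The "downscaling from national or regional inventories" used for less relevant sources is typically a proxy allocation: a national emission total is spread over grid cells in proportion to a spatial surrogate such as population. A minimal sketch with hypothetical numbers, not the Madrid inventory's actual surrogates:

```python
def downscale(national_total, proxy):
    """Distribute a national emission total over grid cells in
    proportion to a spatial proxy (e.g. population per cell)."""
    s = float(sum(proxy))
    return [national_total * p / s for p in proxy]

cells = downscale(100.0, [10, 30, 60])  # -> [10.0, 30.0, 60.0]
```

By construction the cell values sum back to the national total, which is one of the consistency conditions the text mentions.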

Relevance: 30.00%

Abstract:

Sustaining irrigated agriculture to meet food production needs while maintaining aquatic ecosystems is at the heart of many policy debates in various parts of the world, especially in arid and semi-arid areas. Researchers and practitioners are increasingly calling for integrated approaches, and policy-makers are progressively supporting the inclusion of ecological and social aspects in water management programs. This paper contributes to this policy debate by providing an integrated economic-hydrologic modeling framework that captures the socio-economic and environmental effects of various policy initiatives and climate variability. This modeling integration includes a risk-based economic optimization model and a hydrologic water management simulation model that have been specified for the Middle Guadiana basin, a vulnerable drought-prone agro-ecological area with highly regulated river systems in southwest Spain. Specifically, two key water policy interventions were investigated: the implementation of minimum environmental flows (supported by the European Water Framework Directive, EU WFD), and a reduction in the legal amount of water delivered for irrigation (a planned measure included in the new Guadiana River Basin Management Plan, GRBMP, still under discussion). Results indicate that current patterns of excessive water use for irrigation in the basin may put environmental flow demands at risk, jeopardizing the WFD's goal of restoring the 'good ecological status' of water bodies by 2015. Conflicts between environmental and agricultural water uses will intensify during prolonged dry episodes, particularly in summer low-flow periods, when there is an important increase in crop irrigation water requirements. Securing minimum stream flows would entail a substantial reduction in irrigation water use for rice cultivation, which might affect the profitability and economic viability of small rice-growing farms located upstream in the river.
The new GRBMP could contribute to balancing competing water demands in the basin and to increasing economic water productivity, but might not be sufficient to ensure the provision of environmental flows as required by the WFD. A thorough revision of the basin's water use concession system for irrigation seems to be needed in order to bring the GRBMP in line with the WFD objectives. Furthermore, the study illustrates that social, economic, institutional, and technological factors, in addition to bio-physical conditions, are important issues to be considered when designing and developing water management strategies. The research initiative presented in this paper demonstrates that hydro-economic models can explicitly integrate all these issues, constituting a valuable tool that could assist policy makers in implementing sustainable irrigation policies.
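The environmental-flow constraint at the center of this analysis reduces to a simple monthly check: river flow remaining after irrigation withdrawals must not drop below the minimum. A toy sketch with hypothetical monthly volumes, not the basin's actual hydrology:

```python
def eflow_violations(natural_flow, withdrawals, min_eflow):
    """Indices of months where flow after irrigation withdrawals falls
    below the minimum environmental flow."""
    return [m for m, (q, w) in enumerate(zip(natural_flow, withdrawals))
            if q - w < min_eflow]

flow = [90, 80, 70, 50, 40, 25, 15, 12, 20, 40, 60, 80]  # natural flow
use = [5, 5, 5, 10, 15, 18, 12, 10, 8, 5, 5, 5]          # irrigation use
months = eflow_violations(flow, use, min_eflow=10.0)
# With these numbers the violations cluster in the summer low-flow
# months (June-August, indices 5-7), mirroring the result above.
```
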

Relevance: 30.00%

Abstract:

Background: In recent years, Spain has implemented a number of air quality control measures that are expected to lead to a future reduction in fine particle concentrations and an ensuing positive impact on public health. Objectives: We aimed to assess the impact on mortality attributable to a reduction in fine particle levels in Spain in 2014 in relation to the estimated level for 2007. Methods: To estimate exposure, we constructed fine particle distribution models for Spain for 2007 (reference scenario) and 2014 (projected scenario) with a spatial resolution of 16x16 km2. In a second step, we used the concentration-response functions proposed by cohort studies carried out in Europe (European Study of Cohorts for Air Pollution Effects and Rome longitudinal cohort) and North America (American Cancer Society cohort, Harvard Six Cities study and Canadian national cohort) to calculate the number of attributable annual deaths corresponding to all causes, all non-accidental causes, ischemic heart disease and lung cancer among persons aged over 25 years (2005-2007 mortality rate data). We examined the effect of the Spanish demographic shift in our analysis using 2007 and 2012 population figures. Results: Our model suggested that there would be a mean overall reduction in fine particle levels of 1 µg/m3 by 2014. Taking into account 2007 population data, between 8 and 15 all-cause deaths per 100,000 population could be postponed annually by the expected reduction in fine particle levels. For specific subgroups, estimates varied from 10 to 30 deaths for all non-accidental causes, from 1 to 5 for lung cancer, and from 2 to 6 for ischemic heart disease. The expected burden of preventable mortality would be even higher in the future due to Spanish population growth. Taking into account the population older than 30 years in 2012, the absolute mortality impact estimate would increase by approximately 18%.
Conclusions: Effective implementation of air quality measures in Spain, in a scenario with a short-term projection, would amount to an appreciable decline in fine particle concentrations, and this, in turn, would lead to notable health-related benefits. Recent European cohort studies strengthen the evidence of an association between long-term exposure to fine particles and health effects, and could enhance the health impact quantification in Europe. Air quality models can contribute to improved assessment of air pollution health impact estimates, particularly in study areas without air pollution monitoring data.
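The attributable-mortality arithmetic behind such estimates follows the standard log-linear concentration-response form used with cohort relative risks. A sketch; the relative risk and baseline deaths below are illustrative placeholders, not the cohort-specific values applied in the study:

```python
import math

def attributable_deaths(baseline_deaths, rr_per_10, delta_c):
    """Deaths postponed by a concentration decrease delta_c (ug/m3),
    given a relative risk rr_per_10 per 10 ug/m3 under a log-linear
    concentration-response function."""
    beta = math.log(rr_per_10) / 10.0
    attributable_fraction = 1.0 - math.exp(-beta * delta_c)
    return baseline_deaths * attributable_fraction

# e.g. a 1 ug/m3 reduction with RR 1.06 per 10 ug/m3 and 1000 baseline
# deaths postpones on the order of 5-6 deaths.
```
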

Relevance: 30.00%

Abstract:

As environmental standards become more stringent (e.g. European Directive 2008/50/EC), more reliable and sophisticated modeling tools are needed to simulate measures and plans that may effectively tackle air quality exceedances, common in large cities across Europe, particularly for NO2. Modeling air quality in urban areas is rather complex, since observed concentration values are a consequence of the interaction of multiple sources and processes that involve a wide range of spatial and temporal scales. Besides a consistent and robust multi-scale modeling system, comprehensive and flexible emission inventories are needed. This paper discusses the application of the WRF-SMOKE-CMAQ system to the city of Madrid (Spain) to assess the contribution of the main emitting sectors in the region. A detailed emission inventory was compiled for this purpose. This inventory relies on bottom-up methods for the most important sources. It is coupled with the regional traffic model and makes use of an extensive database of industrial, commercial and residential combustion plants. Less relevant sources are downscaled from national or regional inventories. This paper reports the methodology and main results of the source apportionment study performed to understand the origin of pollution (main sectors and geographical areas) and to define clear targets for the abatement strategy. Finally, the structure of the air quality monitoring network is analyzed and discussed to identify options to improve the monitoring strategy, not only in the city of Madrid but in the whole metropolitan area.
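One standard way to obtain sector contributions with a chemistry-transport model such as CMAQ is the zero-out (brute-force) approach: re-run the model with a sector's emissions removed and take the difference. The sketch below only illustrates the bookkeeping; `run_model` is a trivially linear stand-in for a full WRF-SMOKE-CMAQ simulation, and the emission figures are hypothetical:

```python
def run_model(emissions):
    """Stand-in for a chemistry-transport simulation: here the mean
    concentration is simply proportional to total emissions."""
    return 0.8 * sum(emissions.values())

def zero_out_contribution(emissions, sector):
    """Sector contribution = base run minus run with the sector zeroed."""
    base = run_model(emissions)
    perturbed = dict(emissions, **{sector: 0.0})
    return base - run_model(perturbed)

emis = {"traffic": 50.0, "residential": 20.0, "industry": 10.0}
traffic_share = zero_out_contribution(emis, "traffic")
```

For a nonlinear pollutant like NO2 the zero-out contributions need not sum exactly to the base concentration, which is one reason full model runs (rather than simple scaling) are used for such studies.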

Relevance: 30.00%

Abstract:

ABSTRACT

Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24 hours a day, 365 days a year. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for higher resources in traditional applications, has facilitated the rapid proliferation and growth of data centers.
A drawback to this capacity growth has been the rapid increase of the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38GW. A further rise of 17% to 43GW was estimated in 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD Thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses the knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility, but the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a certain temperature range that ensures safe operation. Because of the cubic relation of fan power with fan speed, solutions based on over-provisioning cold air into the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies also have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective.
When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of data room cooling units improves. However, as we increase room temperature, CPU temperature rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, i.e. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD Thesis makes contributions on leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
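The leakage-cooling tradeoff the thesis exploits can be caricatured in a few lines: fan power is cubic in fan speed, while leakage power grows roughly exponentially with CPU temperature, so cooling less saves fan power but pays in leakage. All constants below are purely illustrative, not the thesis's fitted models:

```python
import math

def fan_power(speed_rpm, k=2.0e-9):
    """Fan power (W): cubic in fan speed."""
    return k * speed_rpm ** 3

def leakage_power(temp_c, p0=5.0, alpha=0.03, t_ref=40.0):
    """Leakage power (W): roughly exponential in CPU temperature."""
    return p0 * math.exp(alpha * (temp_c - t_ref))

# Cooling harder lowers temperature (less leakage) at cubic fan cost,
# so the energy-optimal operating point lies at neither extreme; this
# is the multivariate tradeoff the joint cooling/workload strategies
# described above optimize.
overhead_cool = fan_power(3000) + leakage_power(45.0)
overhead_hot = fan_power(1000) + leakage_power(75.0)
```
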

Relevance: 30.00%

Abstract:

PREMISE OF THE STUDY: We conducted environmental niche modeling (ENM) of the Brachypodium distachyon s.l. complex, a model group of two diploid annual grasses (B. distachyon, B. stacei) and their derived allotetraploid (B. hybridum), native to the circum-Mediterranean region. We (1) investigated the ENMs of the three species in their native range based on present and past climate data; (2) identified potential overlapping niches of the diploids and their hybrid across four Quaternary windows; (3) tested whether speciation was associated with niche divergence/conservatism in the species of the complex; and (4) tested for the potential of the polyploid outperforming the diploids in the native range. METHODS: Geo-referenced data, altitude, and 19 climatic variables were used to construct the ENMs. We used paleoclimate niche models to trace the potential existence of ancestral gene flow among the hybridizing species of the complex. KEY RESULTS: Brachypodium distachyon grows in higher, cooler, and wetter places; B. stacei in lower, warmer, and drier places; and B. hybridum in places with intermediate climatic features. Brachypodium hybridum had the largest niche overlap with its parent niches, but a similar distribution range and niche breadth. CONCLUSIONS: Each species had a unique environmental niche, though there were multiple niche-overlapping areas for the diploids across time, suggesting the potential existence of several hybrid zones during the Pleistocene and the Holocene. No evidence of niche divergence was found, suggesting that species diversification was not driven by ecological speciation but by evolutionary history, though it could be associated with distinct environmental adaptations.
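Niche overlap in ENM studies like this one is commonly quantified with Schoener's D on normalized suitability surfaces (1 = identical niches, 0 = no overlap). A minimal sketch of the generic metric; the toy grids are hypothetical, and this is not necessarily this study's exact pipeline:

```python
import numpy as np

def schoeners_D(s1, s2):
    """Schoener's D between two suitability surfaces, each normalized
    to sum to 1 over the shared grid."""
    p1 = np.asarray(s1, dtype=float).ravel()
    p2 = np.asarray(s2, dtype=float).ravel()
    p1 = p1 / p1.sum()
    p2 = p2 / p2.sum()
    return 1.0 - 0.5 * np.abs(p1 - p2).sum()

identical = schoeners_D([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # 1.0
disjoint = schoeners_D([1.0, 0.0], [0.0, 1.0])             # 0.0
```
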

Relevance: 30.00%

Abstract:

Although much of the brain’s functional organization is genetically predetermined, it appears that some noninnate functions can come to depend on dedicated and segregated neural tissue. In this paper, we describe a series of experiments that have investigated the neural development and organization of one such noninnate function: letter recognition. Functional neuroimaging demonstrates that letter and digit recognition depend on different neural substrates in some literate adults. How could the processing of two stimulus categories that are distinguished solely by cultural conventions become segregated in the brain? One possibility is that correlation-based learning in the brain leads to a spatial organization in cortex that reflects the temporal and spatial clustering of letters with letters in the environment. Simulations confirm that environmental co-occurrence does indeed lead to spatial localization in a neural network that uses correlation-based learning. Furthermore, behavioral studies confirm one critical prediction of this co-occurrence hypothesis, namely, that subjects exposed to a visual environment in which letters and digits occur together rather than separately (postal workers who process letters and digits together in Canadian postal codes) do indeed show less behavioral evidence for segregated letter and digit processing.
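The co-occurrence account can be sketched with a toy Hebbian simulation: inputs that appear together receive identical weight updates and so come to drive the same units, while an input presented separately drifts toward different units. This illustrates correlation-based learning in general, not the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0.0, 0.01, size=(3, 4))  # 3 inputs ("A", "B", "7") -> 4 units

def hebbian_step(W, x, lr=0.1):
    y = W.T @ x                     # postsynaptic activations
    return W + lr * np.outer(x, y)  # Hebb rule: dw = lr * pre * post

for _ in range(50):
    W = hebbian_step(W, np.array([1.0, 1.0, 0.0]))  # the letters co-occur
    W = hebbian_step(W, np.array([0.0, 0.0, 1.0]))  # the digit occurs alone

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The two co-occurring "letter" inputs end up with nearly identical
# outgoing weights (shared units), while the "digit" weights point
# elsewhere, mirroring the segregation predicted by the hypothesis.
letters_similarity = cosine(W[0], W[1])
letter_digit_similarity = cosine(W[0], W[2])
```
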