752 results for "creativity of relationships"
Abstract:
The role of physical activity in promoting individual and population health has been well documented in research and policy publications. Significant research activity has produced compelling evidence supporting the positive association between physical activity and improved health. Despite knowledge of these public health benefits, over half of US adults do not engage in physical activity at levels consistent with public health recommendations. Just as physical inactivity is of significant public health concern in the US, the prevalence of obesity (and its attendant co-morbidities) is also increasing among US adults.
Research suggests racial and ethnic disparities in physical inactivity and obesity in the US: various studies have shown more favorable physical activity and obesity outcomes among non-Hispanic whites than among minority groups. This disparity is especially important because Mexican-Americans, the fastest growing segment of the US population, are disproportionately affected by physical inactivity and obesity by a significant margin compared to non-Hispanic whites, so addressing these issues in this group is of significant public health concern.
Although the evidence for the health benefits of physical activity is substantial, research questions remain about the potential motivators for engaging in physical activity. One area of emerging interest is the role the built environment may play in facilitating or inhibiting physical activity. In this study, based on an ongoing research project of the Department of Epidemiology at the University of Texas M. D. Anderson Cancer Center, we examined the built environment, measured objectively through geographic information systems (GIS), and its association with physical activity and obesity among a cohort of Mexican-Americans living in Harris County, Texas. The overall study hypothesis was that residing in dense and highly connected neighborhoods with mixed land use is associated with residents' increased participation in physical activity and lowered prevalence of obesity. We completed the following specific aims: (1) generate a land-use profile of the study area and create a "walkability index" measure for each block group within the study area; (2) compare the level of engagement in physical activity between study participants residing in high-walkability-index block groups and those in low-walkability block groups; and (3) compare the prevalence of obesity between these two groups.
We successfully created the walkability index as an objective measure of the built environment for portions of Harris County, Texas, using a variety of spatial and non-spatial datasets; we are not aware of previous scholarly work of this kind in the Houston area. Our findings on the relationships among the walkability index, physical activity and obesity suggest that: (1) attempts to convert people to being walkers through health promotion activities may be much easier in high-walkability neighborhoods and very hard in low-walkability ones, so health promotion activities to get people active may require a supportive (here, walkable) environment and may not succeed otherwise; and (2) overall, among individuals with less education, those in high-walkability-index areas may show less (extreme) obesity than those in low-walkability areas. To the extent that this association can be substantiated, we (public health practitioners, urban designers and policy experts) may need to start thinking about ways to "retrofit" existing urban forms into more walkable neighborhoods. In this population especially, there may also be a need to focus special attention on those with lower educational attainment.
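The abstract does not spell out how a walkability index is computed; a common formulation in the built-environment literature sums standardized scores for residential density, street connectivity, and land-use mix entropy per block group. A minimal sketch of that approach, with hypothetical field names and toy data (not the study's actual variables):

```python
from math import log
from statistics import mean, stdev

def entropy_mix(shares):
    """Land-use mix entropy, normalized to [0, 1]."""
    nz = [s for s in shares if s > 0]
    if len(nz) < 2:
        return 0.0
    return -sum(s * log(s) for s in nz) / log(len(shares))

def z_scores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def walkability_index(block_groups):
    """Sum of z-scores for density, connectivity, and land-use mix.
    Field names are illustrative assumptions."""
    dens = z_scores([b["dwellings_per_km2"] for b in block_groups])
    conn = z_scores([b["intersections_per_km2"] for b in block_groups])
    mix = z_scores([entropy_mix(b["landuse_shares"]) for b in block_groups])
    return [d + c + m for d, c, m in zip(dens, conn, mix)]

# invented block groups: dense/mixed, sprawling/single-use, intermediate
blocks = [
    {"dwellings_per_km2": 1200, "intersections_per_km2": 90,
     "landuse_shares": [0.5, 0.3, 0.2]},
    {"dwellings_per_km2": 300, "intersections_per_km2": 25,
     "landuse_shares": [0.9, 0.1, 0.0]},
    {"dwellings_per_km2": 700, "intersections_per_km2": 60,
     "landuse_shares": [0.6, 0.2, 0.2]},
]
wi = walkability_index(blocks)
```

Block groups would then be split at a cutoff (e.g. the median index) into the high- and low-walkability strata the abstract compares.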
Abstract:
Birth defects are the leading cause of infant mortality in the United States and are a major cause of lifetime disability. However, efforts to understand their causes have been hampered by a lack of population-specific data. During 1990–2004, 22 state legislatures responded to this need by proposing birth defects surveillance legislation (BDSL). The contrast between these states and those that did not pass BDSL provides an opportunity to better understand the conditions associated with US public health policy diffusion.
This study identifies key state-specific determinants that predict: (1) the introduction of BDSL onto states' formal legislative agendas, and (2) the successful adoption of these laws. Secondary aims were to interpret these findings within a theoretically sound framework and to incorporate evidence from three analytical approaches.
The study begins with a comparative case study of Texas and Oregon (states with divergent BDSL outcomes), including a review of historical documentation and content analysis of key informant interviews. After selecting and operationalizing explanatory variables suggested by the case study, Qualitative Comparative Analysis (QCA) was applied to publicly available data to describe important patterns of variation among 37 states. Results from logistic regression were compared to determine whether the two methods produced consistent findings.
Themes emerging from the comparative case study included differing budgetary conditions and the significance of relationships within policy issue networks. However, the QCA and statistical analyses pointed to the importance of political parties and contrasting societal contexts. Notably, state policies that allow greater access to citizen-driven ballot initiatives were consistently associated with a lower likelihood of introducing BDSL.
Methodologically, these results indicate that a case study approach, while important for eliciting valuable context-specific detail, may fail to detect the influence of overarching, systemic variables such as party competition. However, the QCA and statistical analyses were limited by a lack of existing data to operationalize policy issue networks, and thus may have downplayed the impact of personal interactions.
This study contributes to the field of health policy studies in three ways. First, it emphasizes the importance of collegial and consistent relationships among policy issue network members. Second, it calls attention to political party systems in predicting policy outcomes. Finally, it demonstrates a novel approach (QCA) to interpreting state data in a theoretically significant manner.
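Crisp-set QCA, as referenced above, reduces each case to a binary configuration of conditions and scores every configuration by how consistently it produces the outcome. A minimal sketch with invented state-level data (condition names and values are illustrative, not the study's):

```python
from collections import defaultdict

def truth_table(cases, conditions, outcome):
    """Group cases by their condition configuration and compute, for each
    configuration, the share of cases exhibiting the outcome (consistency)."""
    rows = defaultdict(list)
    for case in cases:
        config = tuple(case[k] for k in conditions)
        rows[config].append(case[outcome])
    return {cfg: sum(o) / len(o) for cfg, o in rows.items()}

# hypothetical states: BAL = easy ballot-initiative access,
# DIV = divided government, BDSL = surveillance law introduced
cases = [
    {"BAL": 0, "DIV": 0, "BDSL": 1},
    {"BAL": 0, "DIV": 0, "BDSL": 1},
    {"BAL": 1, "DIV": 0, "BDSL": 0},
    {"BAL": 1, "DIV": 1, "BDSL": 0},
    {"BAL": 0, "DIV": 1, "BDSL": 1},
]
tt = truth_table(cases, ["BAL", "DIV"], "BDSL")
```

Configurations with consistency near 1 (or 0) feed the Boolean minimization step that yields QCA's solution formulas; here the toy data mirror the reported pattern that greater ballot-initiative access coincides with non-introduction.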
Abstract:
Samples obtained in Hole 803D for shipboard determination of index properties were analyzed to determine their microfossil constituents. The resulting data are compared to shipboard-measured physical properties data to assess the relationships between small-scale fluctuations in physical properties and microfossil content and preservation. Establishing relationships involving index properties of these highly calcareous sediments is difficult because of the role of intraparticle porosity. Relationships were observed between calculated interparticle porosity and microfossil content. Impedance, calculated using bulk density based on interparticle porosity, exhibits an increase with increasing grain size. Variations in the coarse-fraction constituents appear to exert more control over physical properties than variations in the fine-fraction constituents, although the fine fraction makes up more than 85% of the samples by weight.
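The distinction between total and interparticle porosity drives the calculations described above: water sealed inside microfossil chambers travels with the particle, so bulk density and impedance can be recomputed treating hollow microfossils as composite grains. A sketch with standard densities for seawater and calcite but invented porosity and velocity values:

```python
RHO_W = 1.03   # g/cm^3, seawater (standard value)
RHO_C = 2.71   # g/cm^3, calcite grain density (standard value)

total_por = 0.65   # total porosity (invented)
intra_por = 0.15   # volume fraction of water sealed in microfossil chambers (invented)
inter_por = total_por - intra_por   # water between particles

# treat a hollow microfossil as a composite grain: solid calcite + chamber water
particle_vol = (1 - total_por) + intra_por
rho_particle = ((1 - total_por) * RHO_C + intra_por * RHO_W) / particle_vol

rho_bulk = inter_por * RHO_W + particle_vol * rho_particle
z_impedance = rho_bulk * 1.55   # times Vp in km/s (invented) -> acoustic impedance
```

The check below confirms that partitioning porosity this way leaves the two-phase bulk density unchanged; only the effective grain properties differ, which is why interparticle porosity correlates better with the measured properties.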
Abstract:
Recent Pan-Arctic shrub expansion has been interpreted as a response to a warmer climate. However, herbivores can also influence the abundance of shrubs in arctic ecosystems. We addressed these alternative explanations by following the changes in plant community composition during the last 10 years in permanent plots inside and outside exclosures with different mesh sizes that exclude either only reindeer or all mammalian herbivores including voles and lemmings. The exclosures were replicated at three forest and tundra sites at four different locations along a climatic gradient (oceanic to continental) in northern Fennoscandia. Since the last 10 years have been exceptionally warm, we could study how warming has influenced the vegetation in different grazing treatments. Our results show that the abundance of the dominant shrub, Betula nana, has increased during the last decade, but that the increase was more pronounced when herbivores were excluded. Reindeer have the largest effect on shrubs in tundra, while voles and lemmings have a larger effect in the forest. The positive relationship between annual mean temperature and shrub growth in the absence of herbivores and the lack of relationships in grazed controls is another indication that shrub abundance is controlled by an interaction between herbivores and climate. In addition to their effects on taller shrubs (> 0.3 m), reindeer reduced the abundance of lichens, whereas microtine rodents reduced the abundance of dwarf shrubs (< 0.3 m) and mosses. In contrast to short-term responses, competitive interactions between dwarf shrubs and lichens were evident in the long term. These results show that herbivores have to be considered in order to understand how a changing climate will influence tundra ecosystems.
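The contrast reported above between temperature-growth relationships inside and outside exclosures can be pictured as two simple regression slopes, one per grazing treatment. A sketch with invented site data (not the study's measurements):

```python
from statistics import mean

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)

temp = [-2.0, -1.0, 0.0, 1.0, 2.0]           # site mean annual temperature, degC
growth_exclosed = [1.0, 1.6, 2.1, 2.4, 3.0]  # Betula nana increase, herbivores excluded
growth_grazed = [0.9, 1.1, 0.8, 1.0, 1.1]    # grazed controls

b_excl = slope(temp, growth_exclosed)  # clearly positive
b_graz = slope(temp, growth_grazed)    # near zero
```

A positive slope inside the exclosures alongside a flat slope in grazed controls is the signature of the herbivory-climate interaction the abstract describes.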
Abstract:
Crop simulation models allow analysis of various tillage-rotation combinations and exploration of management scenarios. The DSSAT model was tested under rainfed conditions in a 16-year field experiment in semiarid central Spain. The effect of tillage system and winter-cereal-based rotations on crop yield and soil quality was evaluated. The CERES and CROPGRO models were used to simulate crop growth and yield, while DSSAT CENTURY was used in the soil organic carbon (SOC) and soil nitrogen (SN) simulations. Both field observations and CERES-Barley simulations showed that barley grain yield was lower for continuous cereal (BB) than for vetch (VB) and fallow (FB) rotations under both tillage systems. The model predicted higher nitrogen availability under conventional tillage (CT) than under no-tillage (NT), leading to a higher yield under CT. SOC and SN in the top layer were higher under NT than under CT, and decreased with depth in both simulated and observed values. The best combinations for the dryland conditions studied were CT-VB and CT-FB, but CT presented lower SN and SOC content than NT. The beneficial effect of NT on SOC and SN under semiarid Mediterranean conditions can be identified both by field observations and by crop model simulations.
Simulation of the water balance in cropping systems is a useful tool for studying how water can be used efficiently. Comparing the DSSAT soil water balance, a simpler "tipping bucket" approach, with the more mechanistic WAVE model, which integrates the Richards equation, is a powerful method to assess model performance. Soil parameters were calibrated using the Simulated Annealing (SA) global optimization method. A continuous weighing lysimeter in bare fallow provided observed values of drainage and evapotranspiration (ET), while soil water content (SW) was measured by capacitance sensors. Both models performed well after the soil parameters were optimized with SA, simulating the soil water balance components for the calibration period. For the validation period, the optimized models predicted soil water content and soil evaporation over time well. However, drainage was predicted better by WAVE than by DSSAT, which presented larger errors in cumulative values; this could be due to the mechanistic nature of WAVE versus the more functional nature of DSSAT. The good results from WAVE indicate that, after calibration, it could be used as a benchmark for other models for periods when no drainage field measurements are available.
The performance of DSSAT-CENTURY in simulating SOC and N depends strongly on the initialization process. Initialization of the SOC pools from apparent soil N mineralization (Napmin) measurements was proposed as an alternative method (Met.2). Met.2 was compared with the initialization method of Basso et al. (2011) (Met.1) by applying both methods to a 4-year field experiment in an irrigated area of central Spain. Nmin and Napmin were overestimated by Met.1, since the stable pool it yielded (SOC3) in the upper soil layers was lower than that from Met.2. Simulated N leaching was similar for both methods, with good results in the fallow and barley treatments. Met.1 underestimated topsoil SOC when compared with a 12-year observed series. Crop growth and yield were properly simulated by both methods, but N in shoots and grain was overestimated by Met.1. Results varied significantly with the initial SOC pools, highlighting the importance of the initialization procedure. Met.2 offers an alternative for initializing the CENTURY model, enhancing the simulation of soil N processes.
The continuous emergence of new varieties of modern maize hybrids limits the application of crop simulation models, since new hybrids must be calibrated in the field to be suitable for model use. Developing relationships based on cycle duration would simplify calibration requirements, facilitating the rapid incorporation of new cultivars into DSSAT. Six maize hybrids (FAO 300 through FAO 700) were grown in a 2-year field experiment in a semiarid irrigated area of central Spain. Genetic coefficients were obtained sequentially, starting with the phenological development parameters (P1, P2, P5 and PHINT), followed by the crop growth parameters (G2 and G3). The procedure continued until the simulated outputs were in good agreement with the field phenological observations. After calibration, simulated parameters matched observed parameters well, with low RMSE in most cases. The calibrated P1 and P5 increased with cycle duration: P1 was a linear function of the thermal time (TT) from emergence to silking, and P5 was linearly related to the TT from silking to maturity. There were no significant differences in PHINT between hybrids from FAO-500 to FAO-700, as they had similar leaf numbers. Since the phenological coefficients were directly related to cycle duration, it would be possible to develop ranges and correlations that allow such coefficients to be estimated from the cycle classification.
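The phenology coefficients discussed above are expressed in thermal time, and the reported linearity means a new hybrid's P1 could be estimated from its cycle class instead of full field calibration. A sketch of both steps, assuming the common CERES-Maize base temperature of 8 °C and invented anchor points for the FAO-class relationship:

```python
def thermal_time(tmax, tmin, tbase=8.0):
    """Accumulated thermal time (degree-days) over daily max/min temperatures,
    using the simple average-temperature method with a base-temperature cutoff."""
    return sum(max((hi + lo) / 2 - tbase, 0.0) for hi, lo in zip(tmax, tmin))

# invented calibrated (FAO class, P1 in degree-days) anchor points
pairs = [(300, 180.0), (700, 320.0)]

def p1_from_fao(fao):
    """Estimate P1 by linear interpolation between calibrated cycle classes,
    exploiting the reported linear P1-cycle duration relationship."""
    (x0, y0), (x1, y1) = pairs
    return y0 + (y1 - y0) * (fao - x0) / (x1 - x0)
```

For example, an uncalibrated FAO-500 hybrid would receive a P1 midway between the two anchors; the same scheme would apply to P5 with silking-to-maturity thermal time.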
Abstract:
The success of an aquaculture breeding program depends critically on how the base population of breeders is constructed, since all the genetic variability for the traits originally included in the breeding goal, as well as for those to be included in the future, is contained in the initial founders. Traditionally, base populations were created from a number of wild strains by sampling equal numbers from each strain. However, for some aquaculture species improved strains are already available, and mean phenotypic values for economically important traits can therefore be used as a criterion to optimize the sampling when creating base populations. Also, the increasing availability of genome-wide genotype information in aquaculture species could help to refine the estimation of relationships within and between candidate strains, and thus to optimize the percentage of individuals to be sampled from each strain. This study explores the advantages of using phenotypic and genome-wide information when constructing base populations for aquaculture breeding programs, in terms of initial and subsequent trait performance and genetic diversity. Results show that a compromise between diversity and performance can be found when creating base populations: up to 6% higher phenotypic performance can be achieved at the same level of global diversity in the base population by optimizing the selection of breeders instead of sampling equal numbers from each strain. The higher performance observed in the base population persisted over 10 generations of phenotypic selection in the subsequent breeding program.
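Optimizing the sampling from candidate strains amounts to trading mean performance against group coancestry (loss of diversity). A toy two-strain sketch, with invented phenotypic means, coancestry coefficients, and penalty weight (the study's actual optimization is richer than this):

```python
def objective(x, p_a=10.0, p_b=8.0, f_a=0.10, f_b=0.05, f_ab=0.01, lam=20.0):
    """Expected mean phenotype minus a penalty on expected group coancestry.
    x is the contribution of strain A; all parameter values are invented."""
    performance = x * p_a + (1 - x) * p_b
    coancestry = x * x * f_a + (1 - x) ** 2 * f_b + 2 * x * (1 - x) * f_ab
    return performance - lam * coancestry

# grid search over the contribution of strain A
best_x = max((i / 100 for i in range(101)), key=objective)
```

The optimum shifts toward the better-performing strain but stops short of sampling it exclusively, because the within-strain coancestry penalty grows quadratically with its contribution; this is the diversity-performance compromise the abstract describes.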
Abstract:
This thesis undertakes an operative reflection on the creative acts of the architectural project, investigating the procedures that recur in the processes of learning and design. Learning is understood as an uninterrupted attitude in the creator's discourse: an unfinished state, in constant evolution, continuous, not tied to any particular moment, a condition inherent to the creative act. The epistemological framework of Creativity and its associated techniques, in constant application in other disciplines, emerges as a referential gnoseological substratum for understanding the processes of genesis and production of the project and its learning. The research follows a double line of logical-rational and logical-intuitive thought, through inferences of a deductive, inductive and abductive nature. It seeks to stimulate an interest in creativity that helps extend the field of research into how architecture is generated and produced, and into its application to learning. We propose a taxonomic cartography of creative procedures in design, leading towards the enunciation of a Metaheuristic of Creativity: a metaheuristic containing the operative map of Procedures and their Metaprinciples of application to different environments and problems, with adaptive response capacity in all divergent states of the process. A cartography, ultimately, of relationships rather than of systematizations. This creative map comprises 9 Procedures, the result of 9 approaches that constitute 9 Logics of action and reasoning under hologrammatic field conditions. They are oriented towards a new creative attitude: unprejudiced, attentive to transdisciplinary flows, governed equally by chance and rigor, expansive and condensing, multiple and polyhedral, open and inconclusive, playful and frivolous, automatic and unpredictable. A creative attitude that operates through relational procedural logics, always in constant redefinition, far removed from any theorization, without constituting a meta-narrative, more a management of information than a disciplinary device: abductive, analogical, synectic, metaphorical, diffuse, aleatory, suspended, divergent and un-learned.
Abstract:
In humans, declarative or explicit memory is supported by the hippocampus and related structures of the medial temporal lobe working in concert with the cerebral cortex. This paper reviews our progress in developing an animal model for studies of cortical–hippocampal interactions in memory processing. Our findings support the view that the cortex maintains various forms of memory representation and that hippocampal structures extend the persistence and mediate the organization of these codings. Specifically, the parahippocampal region, through direct and reciprocal interconnections with the cortex, is sufficient to support the convergence and extended persistence of cortical codings. The hippocampus itself is critical to the organization of cortical representations in terms of relationships among items in memory and to the flexible memory expression that is the hallmark of declarative memory.
Abstract:
Contracting to provide technological information (TI) is a significant challenge. TI is an unusual commodity in five ways. (i) TI is difficult to count and value; conventional indicators, such as patents and citations, hardly indicate value. TI is often sold at different prices to different parties. (ii) To value TI, it may be necessary to "give away the secret." This danger, despite nondisclosure agreements, inhibits efforts to market TI. (iii) To prove its value, TI is often bundled into complete products, such as a computer chip or pharmaceutical product. Efficient exchange, by contrast, would involve merely the raw information. (iv) Sellers' superior knowledge about TI's value makes buyers wary of overpaying. (v) Inefficient contracts are often designed to secure rents from TI. For example, licensing agreements charge more than marginal cost. These contracting difficulties affect the way TI is produced, encouraging self-reliance. This should be an advantage to large firms. However, small research and development firms spend more per employee than large firms, and nonprofit universities are major producers. Networks of organizational relationships, particularly between universities and industry, are critical in transmitting TI. Implicit barter—money for guidance—is common. Property rights for TI are hard to establish. Patents, quite suitable for better mousetraps, are inadequate for an era when we design better mice. Much TI is not patented, and what is patented sets fuzzy demarcations. New organizational forms are a promising approach to contracting difficulties for TI. Webs of relationships, formal and informal, involving universities, start-up firms, corporate giants, and venture capitalists play a major role in facilitating the production and spread of TI.
Abstract:
The paradigm that mangroves are critical for sustaining production in coastal fisheries is widely accepted, but empirical evidence has been tenuous. This study showed that links between mangrove extent and coastal fisheries production could be detected for some species at a broad regional scale (1000s of kilometres) on the east coast of Queensland, Australia. The relationships between catch-per-unit-effort for different commercially caught species in four fisheries (trawl, line, net and pot) and mangrove characteristics estimated from Landsat images were examined using multiple regression analyses. The species were categorised into three groups based on their life history characteristics: mangrove-related species (banana prawns Penaeus merguiensis, mud crabs Scylla serrata and barramundi Lates calcarifer), estuarine species (tiger prawns Penaeus esculentus and Penaeus semisulcatus, blue swimmer crabs Portunus pelagicus and blue threadfin Eleutheronema tetradactylum) and offshore species (coral trout Plectropomus spp.). For the mangrove-related species, mangrove characteristics such as area and perimeter accounted for most of the variation in the model; for the non-mangrove estuarine species, latitude was the dominant parameter, but some mangrove characteristics (e.g. mangrove perimeter) also made significant contributions to the models. In contrast, for the offshore species, latitude was the dominant variable, with no contribution from mangrove characteristics. This study also identified that finer-scale spatial data for the fisheries, enabling catch information to be attributed to a particular catchment, would help to improve our understanding of relationships between mangroves and fisheries production.
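The regression logic above — mangrove characteristics explaining catch for mangrove-related species while latitude dominates elsewhere — can be illustrated with univariate R² comparisons on invented data (the study itself used multiple regression over Landsat-derived predictors):

```python
from statistics import mean

def r_squared(x, y):
    """Coefficient of determination for a simple linear regression of y on x."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# invented data: a mangrove-related species whose CPUE tracks mangrove area,
# not latitude
mangrove_km2 = [120, 250, 90, 310, 180]
latitude_s = [16.5, 19.0, 21.5, 17.8, 23.0]
cpue = [14.0, 27.5, 10.5, 33.0, 20.0]

r2_mangrove = r_squared(mangrove_km2, cpue)  # high
r2_latitude = r_squared(latitude_s, cpue)    # low
```

For an offshore species such as coral trout, the pattern would reverse, with latitude carrying the explained variance.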
Abstract:
Euastacus crayfish are endemic to freshwater ecosystems of the eastern coast of Australia. While recent evolutionary studies have focused on a few of these species, here we provide a comprehensive phylogenetic estimate of relationships among the species within the genus. We sequenced three mitochondrial gene regions (COI, 16S, and 12S) and one nuclear region (28S) from 40 species of the genus Euastacus, as well as one undescribed species. Using these data, we estimated the phylogenetic relationships within the genus using maximum-likelihood, parsimony, and Bayesian Markov chain Monte Carlo analyses. Using Bayes factors to test different model hypotheses, we found that the best phylogeny supports monophyletic groupings of all but two recognized species and suggests a widespread ancestor that diverged by vicariance. We also show that Euastacus and Astacopsis are most likely monophyletic sister genera. We use the resulting phylogeny as a framework to test biogeographic hypotheses relating to the diversification of the genus.
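Bayes factor tests of competing phylogenetic hypotheses are typically reported on the 2·ln(BF) scale, where values above 10 are conventionally read as very strong support (Kass-Raftery scale). A sketch with invented marginal log-likelihoods, not values from this study:

```python
# marginal log-likelihoods of two topology hypotheses (invented values)
lnl_monophyly = -10234.6    # e.g. Euastacus + Astacopsis as sister genera
lnl_alternative = -10241.9  # competing arrangement

two_ln_bf = 2 * (lnl_monophyly - lnl_alternative)
very_strong = two_ln_bf > 10  # Kass-Raftery: 2*ln(BF) > 10 = very strong
```

Working on the log scale avoids overflow, since the raw marginal likelihoods themselves are astronomically small numbers.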
Abstract:
Participation in leisure-time activities, self-concept perceptions and individual dispositional goal orientations were examined as mediators of relationships between physical coordination and self-evaluations of life satisfaction and general self-concept for 173 boys aged 10-13 years. Participants completed seven-day activity diaries and 12-month retrospective recall questionnaires recording participation in leisure-time activities. Self-report measures of self-concept, global life satisfaction and dispositional goal orientations were also completed. Results showed that boys with moderate to severe physical coordination difficulties had significantly lower self-concept perceptions of physical ability and appearance, peer and parent relations and general self-concept, as well as lower life satisfaction, than boys with medium to high levels of physical coordination. The relationships between boys' physical coordination and their self-perceptions of life satisfaction and general self-concept were significantly influenced by individual self-concept appraisals of physical ability and appearance, peer and parent relations. Adopting task-oriented goals was found to positively change the relationship between physical coordination and both general self-concept and life satisfaction. Team sport participation positively mediated the relationship between physical coordination and life satisfaction. The potential for team sport participation and adoption of task-oriented goals to influence life satisfaction for boys with differing levels of physical coordination is discussed. (c) 2006 Elsevier B.V. All rights reserved.
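The mediation logic behind analyses like this one can be sketched with the product-of-coefficients approach: the indirect effect is the product of the predictor-to-mediator slope (a) and the mediator-to-outcome slope controlling for the predictor (b). The variable names and data below are hypothetical, not the study's measures:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical standardized scores for n boys (illustrative data only):
# physical coordination (predictor), self-concept of physical ability
# (candidate mediator), and life satisfaction (outcome).
n = 173
coordination = rng.normal(0, 1, n)
mediator = 0.6 * coordination + rng.normal(0, 0.8, n)
satisfaction = 0.5 * mediator + 0.1 * coordination + rng.normal(0, 0.8, n)

def slope(x, y):
    """OLS slope of y on x (single predictor)."""
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

# a-path: effect of predictor on mediator.
a = slope(coordination, mediator)

# b-path: effect of mediator on outcome controlling for the predictor,
# obtained by residualizing both on the predictor (Frisch-Waugh-Lovell).
xc = coordination - coordination.mean()
resid_m = mediator - mediator.mean() - slope(coordination, mediator) * xc
resid_y = satisfaction - satisfaction.mean() - slope(coordination, satisfaction) * xc
b = slope(resid_m, resid_y)

indirect = a * b
print(f"a = {a:.2f}, b = {b:.2f}, indirect effect = {indirect:.2f}")
```

A nonzero indirect effect (here roughly a*b) is the quantity a mediation analysis tests, typically with bootstrap confidence intervals in practice.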
Abstract:
One hundred and twelve university students completed 7 tests assessing word-reading accuracy, print exposure, phonological sensitivity, phonological coding and knowledge of English morphology as predictors of spelling accuracy. Together the tests accounted for 71% of the variance in spelling, with phonological skills and morphological knowledge emerging as strong predictors of spelling accuracy for words with both regular and irregular sound-spelling correspondences. The pattern of relationships was consistent with a model in which, as a function of the learning opportunities that are provided by reading experience, phonological skills promote the learning of individual word orthographies and structural relationships among words.
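The "accounted for 71% of the variance" result reflects hierarchical-regression logic: each predictor set's contribution is the increment in R^2 it adds over the predictors already entered. A minimal sketch with synthetic data (the predictor names echo the abstract but the numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def r_squared(X, y):
    """R^2 from OLS of y on X (X already includes an intercept column)."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1 - resid.var() / y.var()

# Hypothetical standardized scores (illustrative, not the study data).
n = 112
phonological = rng.normal(0, 1, n)
morphological = rng.normal(0, 1, n)
spelling = 0.7 * phonological + 0.5 * morphological + rng.normal(0, 0.7, n)

ones = np.ones(n)
r2_base = r_squared(np.column_stack([ones, phonological]), spelling)
r2_full = r_squared(np.column_stack([ones, phonological, morphological]), spelling)

# Incremental variance explained by morphological knowledge
# over phonological skill alone.
delta_r2 = r2_full - r2_base
print(f"R^2 base = {r2_base:.2f}, full = {r2_full:.2f}, delta R^2 = {delta_r2:.2f}")
```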
Abstract:
Anyone who looks at the title of this special issue will agree that the intent behind the preparation of this volume was ambitious: to predict and discuss “The Future of Manufacturing”. Will manufacturing be important in the future? Even though some sceptics might say not, resurrecting some old familiar arguments, we would strongly disagree. To inform the argument, we issued the call for papers for this special issue of the Journal of Manufacturing Technology Management, fully aware of the size of the challenge in our hands. But we strongly believed that the enterprise would be worthwhile. The point of departure is the ongoing debate concerning the meaning and content of manufacturing. The easily visualised internal activity of using tangible resources to make physical products in factories is no longer a viable way to characterise manufacturing. It is now a more loosely defined concept concerning the organisation and management of open, interdependent systems for delivering goods and services, tangible and intangible, to diverse types of markets. Interestingly, Wickham Skinner is the most cited author in this special issue of JMTM. He provides the point of departure for several articles because his vision and insights have guided and inspired researchers in production and operations management from the late 1960s until today. However, the picture that we draw after looking at the contributions in this special issue is intrinsically distinct, much more dynamic, and complex.
Seven articles address the following research themes: (1) new patterns of organisation, where the boundaries of firms become blurred and the role of the firm in the production system, as well as that of manufacturing within the firm, becomes contingent; (2) new approaches to strategic decision-making in markets characterised by turbulence and weak signals at the customer interface; (3) new challenges in strategic and operational decisions due to changes in the profile of the workforce; (4) new global players, especially China, modifying the manufacturing landscape; and (5) new techniques, methods and tools that are being made feasible through progress in new technological domains. Of course, many other important dimensions could be studied, but these themes are representative of current changes and future challenges. Three articles look at the first theme: organisational evolution of production and operations in firms and networks. Karlsson and Skold's article represents one further step in their efforts to characterise “the extraprise”. In the article, they advance the construction of a new framework, based on “the network perspective”, by defining the formal elements which compose it and exploring the meaning of different types of relationships. The way in which “actors, resources and activities” are conceptualised extends the existing boundaries of analytical thinking in operations management and opens new avenues for research, teaching and practice. The higher level of abstraction, an intrinsic feature of the framework, is associated with the increasing degree of complexity that characterises decisions related to strategy and implementation in the manufacturing and operations area, a feature that is expected to become more and more pervasive as time proceeds. Riis, Johansen, Englyst and Sorensen have also based their article on their previous work, which in this case is on “the interactive firm”.
They advance new propositions on the strategic roles of manufacturing and discuss why the configuration of strategic manufacturing roles, at the level of the network, will become a key issue, and how the indirect strategic roles of manufacturing will become increasingly important. Additionally, by considering that value chains will become value webs, they predict that shifts in strategic manufacturing roles will look like a sequence of moves similar to a game of chess. Then, lastly under the first theme, Fleury and Fleury develop a conceptual framework for the study of production systems in general, derived from field research in the telecommunications industry, here considered a prototype of the coming information society and knowledge economy. They propose a new typology of firms which, on certain dimensions, complements the propositions found in the other two articles. Their telecoms-based framework (TbF) comprises six types of companies characterised by distinct profiles of organisational competences, which interact according to specific patterns of relationships, thus creating distinct configurations of production networks. The second theme is addressed by Kyläheiko and Sandström in their article “Strategic options based framework for management of dynamic capabilities in manufacturing firms”. They propose a new approach to strategic decision-making in markets characterised by turbulence and weak signals at the customer interface. Their framework for a manufacturing firm in the digital age leads to active asset selection (strategic investments in both tangible and intangible assets) and efficient orchestration of the global value net in “thin” intangible asset markets. The framework consists of five steps based on Porter's five-forces model and the resource-based view, complemented by the concepts of strategic options and related flexibility issues.
Thun, Grössler and Miczka's contribution to the third theme brings the human dimension to the debate on the future of manufacturing. Their article focuses on the challenges brought to management by the ageing of the workforce in Germany, but, in the arguments that are raised, the future challenges associated with workers and work organisation in every production system become visible and relevant. An interesting point in the approach adopted by the authors is that not only factual problems and solutions are taken into account, but managers' perceptions are also brought into the picture. China cannot be absent from the discussion of the future of manufacturing. Therefore, within the fourth theme, Vaidya, Bennett and Liu provide evidence of the gradual improvement of Chinese companies in the medium- and high-tech sectors, using revealed comparative advantage (RCA) analysis. The Chinese evolution is shown to be based on capabilities developed through combining international technology transfer and indigenous learning. The main implication for Western companies is the need to take account of the accelerated rhythm of capability development in China. For other developing countries, China's case provides lessons of great importance. Finally, under the fifth theme, Kuehnle's article “Post mass production paradigm (PMPP) trajectories” provides a futuristic scenario of what is already around us and might become prevalent in the future. It takes a very intensive look at a whole set of dimensions that are affecting manufacturing now and will influence manufacturing in the future, ranging from the application of ICT to the need for social transparency. In summary, this special issue of JMTM presents a brief but indisputable demonstration of the possible richness of manufacturing in the future. Indeed, we could even say that manufacturing has no future if we only stick to past perspectives. Embracing the new is not easy.
The new configurations of production systems, the distributed and complementary roles to be performed by distinct types of companies in diversified networked structures, leveraged by new emergent technologies and associated with new challenges for managing people, are all themes that are carriers of the future. The Guest Editors of this special issue on the future of manufacturing are strongly convinced that their undertaking has been worthwhile.
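The revealed comparative advantage analysis mentioned under the fourth theme is based on Balassa's RCA index, which compares a country's export share in a sector to the world's export share in that sector. A minimal sketch with entirely hypothetical figures:

```python
# Balassa's revealed comparative advantage (RCA) index:
# RCA = (country exports in a sector / country's total exports)
#       / (world exports in that sector / total world exports).
# An index above 1 reveals a comparative advantage in that sector.

def rca(country_sector, country_total, world_sector, world_total):
    return (country_sector / country_total) / (world_sector / world_total)

# Hypothetical export values (billions USD), for illustration only.
index = rca(country_sector=120, country_total=800,
            world_sector=900, world_total=12000)
print(f"RCA = {index:.2f}")
```

Tracking how such an index moves over time for medium- and high-tech sectors is the kind of evidence the RCA analysis provides.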
Abstract:
In three experiments, we manipulated participants' perceived numerical status and compared the originality and creativity of arguments generated by members of numerical minorities and majorities. Independent judges, blind to experimental conditions, rated participants' written arguments. In Studies 1 and 2, we found that participants assigned to a numerical minority generated more original arguments when advocating their own position than did numerical majorities. In Study 3, an equal-factions control group was included in the design, and all participants were instructed to argue for a counter-attitudinal position. Those in the numerical minority generated more creative arguments than those in both the majority and equal-factions conditions, but not stronger arguments. We propose cognitive and social processes that may underlie our obtained effects and discuss implications for minority influence research.