903 results for NETWORK OF MEANINGS
Abstract:
The crosstalk between fibroblasts and keratinocytes is a vital component of the wound healing process, and involves the activity of a number of growth factors and cytokines. In this work, we develop a mathematical model of this crosstalk in order to elucidate the effects of these interactions on the regeneration of collagen in a wound that heals by second intention. We consider the role of four components that strongly affect this process: transforming growth factor-beta, platelet-derived growth factor, interleukin-1 and keratinocyte growth factor. The impact of this network of interactions on the degradation of an initial fibrin clot, as well as its subsequent replacement by a matrix that is mainly comprised of collagen, is described through an eight-component system of nonlinear partial differential equations. Numerical results, obtained in a two-dimensional domain, highlight key aspects of this multifarious process such as reepithelialisation. The model is shown to reproduce many of the important features of normal wound healing. In addition, we use the model to simulate the treatment of two pathological cases: chronic hypoxia, which can lead to chronic wounds; and prolonged inflammation, which has been shown to lead to hypertrophic scarring. We find that our model predictions are qualitatively in agreement with previously reported observations, and provide an alternative pathway for gaining insight into this complex biological process.
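The eight-component PDE system of the abstract is not reproduced here, but the general shape of such models can be sketched with a generic two-species reaction-diffusion step solved by explicit finite differences; the kinetics below are illustrative placeholders, not the paper's wound-healing equations.

```python
import numpy as np

def step(u, v, du, dv, f, g, dx, dt):
    """One explicit finite-difference step of a generic two-species
    reaction-diffusion system: u_t = Du*u_xx + f(u,v), v_t = Dv*v_xx + g(u,v),
    on a periodic 1D domain."""
    lap = lambda w: (np.roll(w, 1) - 2 * w + np.roll(w, -1)) / dx**2
    un = u + dt * (du * lap(u) + f(u, v))
    vn = v + dt * (dv * lap(v) + g(u, v))
    return un, vn

# Illustrative kinetics (logistic growth coupled to linear production/decay),
# not the paper's eight-component wound-healing model.
f = lambda u, v: u * (1 - u) - 0.5 * u * v
g = lambda u, v: 0.3 * u - 0.2 * v

u = np.zeros(100); u[45:55] = 1.0   # initial "wound edge" patch
v = np.zeros(100)
for _ in range(200):                 # dt*Du/dx^2 = 0.01, stable for this scheme
    u, v = step(u, v, 0.1, 0.05, f, g, dx=1.0, dt=0.1)
```

The same stepping structure extends directly to more species and two spatial dimensions, which is how systems of this kind are usually solved numerically.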
Abstract:
The objective of this exploratory study is to investigate the main drivers that enhance and inhibit the export performance of Chilean wineries. Based on survey data collected from Chilean wineries, the findings of this study suggest that the main constraints on Chilean wineries in developing exports are the lack of financial resources, limited stock quantities for market expansion, management’s lack of knowledge and experience, and the high cost of travelling to and participating in trade shows. The main drivers of wine export performance, according to the respondents, are high quality of the wines, a well-established network of international distributors, and marketing skills. The major inhibitors of developing wine exports are exchange rate variability, problems in selecting a reliable international distributor, and limited government support to promote wine exports. This study also shows that export managers of Chilean wineries have high educational levels and international experience. The findings have important implications for the export development efforts of both governments and managers.
Abstract:
Creativity plays an increasingly important role in our personal, social, educational, and community lives. For adolescents, creativity can enable self-expression, be a means of pushing boundaries, and assist learning, achievement, and completion of everyday tasks. Moreover, adolescents who demonstrate creativity can potentially enhance their capacity to face unknown future challenges, address mounting social and ecological issues in our global society, and improve their career opportunities and contribution to the economy. For these reasons, creativity is an essential capacity for young people in their present and future, and is highlighted as a priority in current educational policy nationally and internationally. Despite growing recognition of creativity’s importance and attention to creativity in research, the creative experience from the perspectives of the creators themselves, and the creativity of adolescents, are neglected fields of study. Hence, this research investigated adolescents’ self-reported experiences of creativity to improve understanding of their creative processes and manifestations, and how these can be supported or inhibited. Although some aspects of creativity have been extensively researched, there was no comprehensive, multidisciplinary theoretical framework of adolescent creativity to provide a foundation for this study. Therefore, a grounded theory methodology was adopted for the purpose of constructing a new theory to describe and explain adolescents’ creativity in a range of domains. The study’s constructivist-interpretivist perspective viewed the data and findings as interpretations of adolescents’ creative experiences, co-constructed by the participants and the researcher. The research was conducted in two academically selective high schools in Australia: one arts school, and one science, mathematics, and technology school. Twenty adolescent participants (10 from each school) were selected using theoretical sampling.
Data were collected via focus groups, individual interviews, an online discussion forum, and email communications. Grounded theory methods informed a process of concurrent data collection and analysis; each iteration of analysis informed subsequent data collection. Findings portray creativity as it was perceived and experienced by participants, presented in a Grounded Theory of Adolescent Creativity. The Grounded Theory of Adolescent Creativity comprises a core category, Perceiving and Pursuing Novelty: Not the Norm, which linked all findings in the study. This core category explains how creativity involved adolescents perceiving stimuli and experiences differently, approaching tasks or life unconventionally, and pursuing novel ideas to create outcomes that are not the norm when compared with outcomes by peers. Elaboration of the core category is provided by the major categories of findings. That is, adolescent creativity entailed utilising a network of Sub-Processes of Creativity, using strategies for Managing Constraints and Challenges, and drawing on different Approaches to Creativity – adaptation, transfer, synthesis, and genesis – to apply the sub-processes and produce creative outcomes. Potentially, there were Effects of Creativity on Creators and Audiences, depending on the adolescent and the task. Three Types of Creativity were identified as the manifestations of the creative process: creative personal expression, creative boundary pushing, and creative task achievement. Interactions among adolescents’ dispositions and environments were influential in their creativity. Patterns and variations of these interactions revealed a framework of four Contexts for Creativity that offered different levels of support for creativity: high creative disposition–supportive environment; high creative disposition–inhibiting environment; low creative disposition–supportive environment; and low creative disposition–inhibiting environment. 
These contexts represent dimensional ranges of how dispositions and environments supported or inhibited creativity, and reveal that the optimal context for creativity differed depending on the adolescent, task, domain, and environment. This study makes four main contributions, which have methodological and theoretical implications for researchers, as well as practical implications for adolescents, parents, teachers, policy and curriculum developers, and other interested stakeholders who aim to foster the creativity of adolescents. First, this study contributes methodologically through its constructivist-interpretivist grounded theory methodology combining the grounded theory approaches of Corbin and Strauss (2008) and Charmaz (2006). Innovative data collection was also demonstrated through integration of data from online and face-to-face interactions with adolescents, within the grounded theory design. These methodological contributions have broad applicability to researchers examining complex constructs and processes, and with populations who integrate multimedia as a natural form of communication. Second, applicable to creativity in diverse domains, the Grounded Theory of Adolescent Creativity supports a hybrid view of creativity as both domain-general and domain-specific. A third major contribution was identification of a new form of creativity, educational creativity (ed-c), which categorises creativity for learning or achievement within the constraints of formal educational contexts. These theoretical contributions inform further research about creativity in different domains or multidisciplinary areas, and with populations engaged in formal education. However, the key contribution of this research is that it presents an original Theory and Model of Adolescent Creativity to explain the complex, multifaceted phenomenon of adolescents’ creative experiences.
Abstract:
Mesenchymal stem cells (MSCs) are undifferentiated, multipotent stem cells with the ability to self-renew. They can differentiate into many types of terminal cells, such as osteoblasts, chondrocytes, adipocytes, myocytes, and neurons. These cells have been applied in tissue engineering as the main cell type used to regenerate new tissues. However, a number of issues remain concerning the use of MSCs, such as cell surface markers, the determining factors responsible for their differentiation into terminal cells, and the mechanisms whereby growth factors stimulate MSCs. In this chapter, we discuss how proteomic techniques have contributed to our current knowledge and how they can be used to address issues currently facing MSC research. The application of proteomics has led to the identification of a characteristic pattern of cell surface protein expression in MSCs. The technique has also contributed to the study of the regulatory network of MSC differentiation into terminally differentiated cells, including osteocytes, chondrocytes, adipocytes, neurons, cardiomyocytes, hepatocytes, and pancreatic islet cells. It has also helped elucidate mechanisms of growth factor–stimulated differentiation of MSCs. Proteomics cannot, however, reveal the precise role of a particular pathway, and must therefore be combined with other approaches for this purpose. A new generation of proteomic techniques has recently been developed, which will enable a more comprehensive study of MSCs.
Abstract:
In order to support intelligent transportation system (ITS) road safety applications such as collision avoidance, lane departure warnings and lane keeping, a Global Navigation Satellite System (GNSS) based vehicle positioning system has to provide lane-level (0.5 to 1 m) or even in-lane-level (0.1 to 0.3 m) accurate and reliable positioning information to vehicle users. However, current vehicle navigation systems equipped with a single-frequency GPS receiver can only provide road-level accuracy of 5-10 meters. The positioning accuracy can be improved to sub-meter or better with augmented GNSS techniques such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP), which have traditionally been used in land surveying or in slowly moving environments. In these techniques, GNSS correction data generated from a local, regional or global network of GNSS ground stations are broadcast to users via various communication data links, mostly 3G cellular networks and communication satellites. This research aimed to investigate the performance of precise positioning systems when operating in high-mobility environments. This involved evaluating the performance of both RTK and PPP techniques using: i) a state-of-the-art dual-frequency GPS receiver; and ii) a low-cost single-frequency GNSS receiver. Additionally, this research evaluated the effectiveness of several operational strategies in reducing the load that correction data transmission places on data communication networks, which may be problematic for future wide-area ITS service deployment. These strategies include the use of different data transmission protocols, different correction data format standards, and correction data transmission at less frequent intervals. A series of field experiments was designed and conducted for each research task. Firstly, the performance of RTK and PPP techniques was evaluated in both static and kinematic (highway with speeds exceeding 80 km/h) experiments.
RTK solutions achieved RMS precisions of 0.09 to 0.2 meters in static and 0.2 to 0.3 meters in kinematic tests, while PPP achieved 0.5 to 1.5 meters in static and 1 to 1.8 meters in kinematic tests, using the RTKlib software. These RMS precision values could be further improved if better RTK and PPP algorithms were adopted. The test results also showed that RTK may be more suitable for lane-level accuracy vehicle positioning. Professional-grade (dual-frequency) and mass-market-grade (single-frequency) GNSS receivers were tested for their RTK performance in static and kinematic modes. The analysis showed that mass-market-grade receivers provide good solution continuity, although their overall positioning accuracy is worse than that of professional-grade receivers. In an attempt to reduce the load on the data communication network, we first evaluated the use of different correction data format standards, namely the RTCM version 2.x and RTCM version 3.0 formats. A 24-hour transmission test was conducted to compare the network throughput. The results showed that a 66% reduction in network throughput can be achieved by using the newer RTCM version 3.0 format compared with the older RTCM version 2.x format. Secondly, experiments were conducted to examine the use of two data transmission protocols, TCP and UDP, for correction data transmission through the Telstra 3G cellular network. The performance of each transmission method was analysed in terms of packet transmission latency, packet dropout, packet throughput, packet retransmission rate, etc. The overall network throughput and latency of UDP data transmission were 76.5% and 83.6% of those of TCP data transmission, while the overall accuracy of the positioning solutions remained at the same level. Additionally, due to the nature of UDP transmission, it was also found that 0.17% of UDP packets were lost during the kinematic tests, but this loss did not lead to a significant reduction in the quality of the positioning results.
The experimental results from the static and kinematic field tests also showed that the mobile network communication may be blocked for a couple of seconds, but the positioning solutions can be kept at the required accuracy level by an appropriate setting of the Age of Differential. Finally, we investigated the effects of using less frequent correction data (transmitted at 1, 5, 10, 15, 20, 30 and 60 second intervals) on the precise positioning system. As the time interval increases, the percentage of ambiguity-fixed solutions gradually decreases, while the positioning error increases from 0.1 to 0.5 meters. The results showed that the position accuracy could still be kept at the in-lane level (0.1 to 0.3 m) when using correction data transmitted at intervals of up to 20 seconds.
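As a rough illustration of how RMS precision figures like those quoted above are typically computed, the following sketch derives a single horizontal RMS value from east/north position residuals; the residual values are hypothetical, not the thesis data.

```python
import numpy as np

def horizontal_rms(east_err, north_err):
    """2D (horizontal) RMS of position errors in metres:
    sqrt(mean(e^2 + n^2)) over all epochs."""
    e = np.asarray(east_err, dtype=float)
    n = np.asarray(north_err, dtype=float)
    return float(np.sqrt(np.mean(e**2 + n**2)))

# Hypothetical per-epoch residuals (metres) from a kinematic run.
east = [0.05, -0.12, 0.08, -0.03]
north = [0.10, 0.02, -0.09, 0.07]
rms = horizontal_rms(east, north)  # a single figure of merit for the run
```

Comparing such a value against the 0.1-0.3 m in-lane threshold is one simple way to judge whether a solution meets the accuracy requirement.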
Abstract:
Since the late twentieth century, there has been a shift away from delivery of infrastructure, including road networks, exclusively by the state. Subsequently, a range of alternative delivery models, including governance networks, has emerged. However, little is known about how connections between these networks and their stakeholders are created, managed or sustained. Using an analytical framework based on a synthesis of theories of network and stakeholder management, three cases in road infrastructure in Queensland, Australia are examined. The paper finds that although network management can be used to facilitate stakeholder engagement, such activities in the three cases are mainly focused within the core network of those most directly involved with delivery of the infrastructure, often to the exclusion of other stakeholder groups.
Abstract:
The health effects of environmental hazards are often examined using time series of the association between a daily response variable (e.g., death) and a daily level of exposure (e.g., temperature). Exposures are usually averaged over a network of stations. This gives each station equal importance, and negates the opportunity for some stations to be better measures of exposure. We used a Bayesian hierarchical model that weighted stations using random variables between zero and one. We compared the weighted estimates to the standard model using data on health outcomes (deaths and hospital admissions) and exposures (air pollution and temperature) in Brisbane, Australia. The improvements in model fit were relatively small, and the estimated health effects of pollution were similar using either the standard or the weighted estimates. Spatially weighted exposures would probably be more worthwhile when there is either greater spatial detail in the health outcome or greater spatial variation in exposure.
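A minimal sketch of the weighting idea, assuming fixed illustrative weights rather than the posterior weights the Bayesian hierarchical model would actually estimate:

```python
import numpy as np

def weighted_exposure(station_series, weights):
    """Combine daily exposure series from several stations using
    normalised non-negative weights. Equal weights reproduce the
    standard 'network average' exposure."""
    x = np.asarray(station_series, dtype=float)   # shape (stations, days)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                               # normalise to sum to 1
    return w @ x                                  # weighted daily exposure

# Hypothetical daily temperatures (degrees C) at three stations.
temps = [[24.0, 25.5, 26.0],     # station A
         [23.0, 24.5, 25.0],     # station B
         [28.0, 29.0, 30.0]]     # station C (poorly sited, down-weighted)
equal = weighted_exposure(temps, [1, 1, 1])          # the standard model
weighted = weighted_exposure(temps, [0.45, 0.45, 0.10])
```

The weighted series would then replace the simple average as the exposure term in the health regression model.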
Abstract:
The current rapid urban growth throughout the world manifests in various ways, and historically cities have grown similarly, alternately or simultaneously between planned extensions and organic informal settlements (Mumford, 1989). Within cities, different urban morphological regions can reveal different contexts of economic growth and/or periods of dramatic social/technological change (Whitehand, 2001, 105). Morpho-typological study of alternate contexts can present alternative models and contribute to the present discourse questioning traditional paradigms of urban planning and design (Todes et al, 2010). In this study a series of cities is examined as a preliminary exploration into the urban morphology of cities in ‘humid subtropical’ climates. From an initial set of twenty, six cities were selected: Sao Paulo, Brazil; Jacksonville, USA; Maputo, Mozambique; Kanpur, India; Hong Kong, China; and Brisbane, Australia. The urban form was analysed from satellite imagery at a constant scale. Urban morphological regions (types) were identified as those demonstrating particular consistent characteristics of form (density, typology and pattern) different from their surroundings when examined at a constant scale. This analysis was correlated with existing data and literature discussing the proliferation of two types of urban development, ‘informal settlement’ (defined here as self-organised communities identifiable with, but not always synonymous with, ‘slums’) and ‘suburbia’ (defined here as master-planned communities of generally detached houses prevalent in western society) - the extreme ends of a hypothetical spectrum from ‘planned’ to ‘spontaneous’ urban development.
Preliminary results show that some cities contain a wide variety of urban form, ranging from the highly organic ‘self-organised’ type to the highly planned ‘master planned community’ (in the case of Sao Paulo), while others tend to fall at one end of the planning spectrum or the other (more planned in the cases of Brisbane and Jacksonville; both highly planned and highly organic in the case of Maputo). Further research will examine the social, economic and political drivers and controls that lead to this diversity or homogeneity of urban form, and will speculate on the role of self-organisation as a process for the adaptation of urban form.
Abstract:
Free association norms indicate that words are organized into semantic/associative neighborhoods within a larger network of words and links that bind the net together. We present evidence indicating that memory for a recent word event can depend on implicitly and simultaneously activating related words in its neighborhood. Processing a word during encoding primes its network representation as a function of the density of the links in its neighborhood. Such priming increases recall and recognition and can have long-lasting effects when the word is processed in working memory. Evidence for this phenomenon is reviewed in extralist cuing, primed free association, intralist cuing, and single-item recognition tasks. The findings also show that when a related word is presented to cue the recall of a studied word, the cue activates it in an array of related words that distract and reduce the probability of its selection. The activation of the semantic network produces priming benefits during encoding and search costs during retrieval. In extralist cuing, recall is a negative function of cue-to-distracter strength and a positive function of neighborhood density, cue-to-target strength, and target-to-cue strength. We show how four measures derived from the network can be combined and used to predict memory performance. These measures play different roles in different tasks, indicating that the contribution of the semantic network varies with the context provided by the task. We evaluate spreading activation and quantum-like entanglement explanations for the priming effect produced by neighborhood density.
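One simple way to operationalise neighborhood density is as the proportion of realised associate-to-associate links; the sketch below is an illustrative simplification of the free association norm measures, with hypothetical words and links.

```python
def neighborhood_density(neighbors, links):
    """Proportion of possible directed associate-to-associate links
    that actually occur in a set of free association links.
    With n associates there are n*(n-1) possible directed links."""
    n = len(neighbors)
    if n < 2:
        return 0.0
    possible = n * (n - 1)
    present = sum(1 for a in neighbors for b in neighbors
                  if a != b and (a, b) in links)
    return present / possible

# Hypothetical neighborhood for the cue "web" and its observed links.
neighbors = ["web", "spider", "net", "internet"]
links = {("web", "spider"), ("spider", "web"),
         ("web", "net"), ("net", "internet")}
density = neighborhood_density(neighbors, links)  # 4 of 12 possible links
```

Dense neighborhoods (high values) are the ones the abstract predicts will show stronger priming benefits at encoding.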
Abstract:
Articular cartilage is a complex structure with an architecture in which fluid-swollen proteoglycans are constrained within a 3D network of collagen fibrils. Because of the complexity of the cartilage structure, the relationship between its mechanical behaviours at the macro-scale level and its components at the micro-scale level is not completely understood. The research objective in this thesis is to create a new model of articular cartilage that can be used to simulate and obtain insight into the micro-macro interaction and the mechanisms underlying its mechanical responses during physiological function. The new model of articular cartilage has two characteristics, namely: i) it does not use the fibre-reinforced composite material idealisation; and ii) it provides a framework for probing the micro-mechanism of the fluid-solid interaction underlying the deformation of articular cartilage, using simple rules of repartition instead of constitutive/physical laws and intuitive curve-fitting. Even though there are various microstructural and mechanical behaviours that could be studied, the scope of this thesis is limited to osmotic pressure formation and distribution and their influence on cartilage fluid diffusion and percolation, which in turn govern the deformation of the compression-loaded tissue. The study can be divided into two stages. In the first stage, the distributions and concentrations of proteoglycans, collagen and water were investigated using histological protocols. Based on this, the structure of cartilage was conceptualised as microscopic osmotic units consisting of these constituents, distributed according to the histological results. These units were repeated three-dimensionally to form the structural model of articular cartilage.
In the second stage, cellular automata were incorporated into the resulting matrix (lattice) to simulate the osmotic pressure of the fluid and the movement of water within and out of the matrix, following the osmotic pressure gradient in accordance with the chosen rule of repartition of the pressure. The outcome of this study is a new model of articular cartilage that can be used to simulate and study the micromechanical behaviours of cartilage under different conditions of health and loading. These behaviours are illuminated at the micro-scale level using the so-called neighbourhood rules developed in the thesis in accordance with the typical requirements of cellular automata modelling. Using these rules and relevant boundary conditions to simulate pressure distribution and related fluid motion produced significant results that provided the following insights into the relationships between the osmotic pressure gradient, the associated fluid micro-movement, and the deformation of the matrix. For example, it could be concluded that: 1. It is possible to model articular cartilage with the agent-based model of cellular automata and the Margolus neighbourhood rule. 2. The concept of 3D interconnected osmotic units is a viable structural model for the extracellular matrix of articular cartilage. 3. Different rules of osmotic pressure advection lead to different patterns of deformation in the cartilage matrix, enabling insight into how this micromechanism influences macromechanical deformation. 4. When features such as the transition coefficient (representing permeability) are altered due to changes in the concentrations of collagen and proteoglycans (i.e. degenerative conditions), the deformation process is affected. 5. The boundary conditions also influence the relationship between the osmotic pressure gradient and fluid movement at the micro-scale level.
The outcomes are important to cartilage research since they can be used to study micro-scale damage in the cartilage matrix. From this, we are able to monitor related diseases and their progression, leading to potential insight into drug-cartilage interaction for treatment. This innovative model represents incremental progress in attempts at creating further computational modelling approaches for cartilage research and other fluid-saturated tissues and material systems.
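The Margolus neighbourhood mentioned in conclusion 1 partitions the lattice into 2x2 blocks whose boundaries shift by one cell on alternate steps, which is what lets block-local rules propagate quantities across the grid. The toy sketch below uses a simple rotation rule to show the mechanics; it is not the thesis's osmotic-pressure repartition rules.

```python
import numpy as np

def margolus_step(grid, block_rule, odd):
    """Apply a rule to each 2x2 block of an even-sided grid.
    On odd steps the partition is shifted by one cell in each axis
    (the defining feature of the Margolus neighbourhood)."""
    g = np.roll(grid, (1, 1), axis=(0, 1)) if odd else grid.copy()
    h, w = g.shape
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            g[i:i+2, j:j+2] = block_rule(g[i:i+2, j:j+2])
    return np.roll(g, (-1, -1), axis=(0, 1)) if odd else g

# Toy rule: rotate each block clockwise; it conserves the total "fluid".
rotate = lambda b: np.rot90(b, -1)

grid = np.zeros((8, 8)); grid[3:5, 3:5] = 1.0   # initial fluid patch
total = grid.sum()
for t in range(4):
    grid = margolus_step(grid, rotate, odd=(t % 2 == 1))
```

A conservation check (the total never changes under a permutation rule) is a useful sanity test for any repartition rule plugged into this scheme.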
Abstract:
As one of the measures for decreasing road traffic noise in a city, the control of traffic flow and physical distribution is considered. To implement such measures effectively, a model for predicting the traffic flow in the citywide road network is necessary. In this study, an existing model named AVENUE was used as the traffic flow prediction model. The traffic flow model was integrated with a road vehicle sound power model and a sound propagation model to establish a new road traffic noise prediction model. As a case study, the prediction model was applied to the road network of Tsukuba city in Japan and a noise map of the city was produced. To examine the calculation accuracy of the noise map, the calculated noise values at the main roads were compared with measured values. The results indicate that a high-accuracy noise map of the city can be produced using the noise prediction model developed in this study.
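The chain from vehicle sound power to receiver noise level can be illustrated with a free-field point-source sketch. The spreading formula and the energetic summation below are textbook simplifications with hypothetical input levels, not the AVENUE-based model of the study, which would also account for line sources, ground effects and barriers.

```python
import math

def receiver_level(lw_db, distance_m):
    """Free-field sound pressure level (dB) at a receiver from a point
    source of sound power level Lw, spherical spreading only:
    Lp = Lw - 20*log10(r) - 11."""
    return lw_db - 20 * math.log10(distance_m) - 11

def combine_levels(levels_db):
    """Energetically sum incoherent contributions (e.g. several vehicles)."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# Hypothetical vehicle sound power levels (dB) and distances to one receiver.
lp = [receiver_level(100, 15), receiver_level(95, 30)]
total = combine_levels(lp)
```

Evaluating such a sum on a grid of receiver points, with source levels supplied by the traffic flow model, is conceptually how a noise map is assembled.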
Abstract:
Bridges are currently rated individually for maintenance and repair action according to the structural conditions of their elements. Dealing with thousands of bridges and the many factors that cause deterioration, makes this rating process extremely complicated. The current simplified but practical methods are not accurate enough. On the other hand, the sophisticated, more accurate methods are only used for a single or particular bridge type. It is therefore necessary to develop a practical and accurate rating system for a network of bridges. The first most important step in achieving this aim is to classify bridges based on the differences in nature and the unique characteristics of the critical factors and the relationship between them, for a network of bridges. Critical factors and vulnerable elements will be identified and placed in different categories. This classification method will be used to develop a new practical rating method for a network of railway bridges based on criticality and vulnerability analysis. This rating system will be more accurate and economical as well as improve the safety and serviceability of railway bridges.
Abstract:
Objective: To explore the range of meanings about the role of support for patients with hepatitis C by examining medical specialists' perceptions. Method: The study employed a qualitative, open-ended interview design and was conducted in four major teaching hospitals in Adelaide, South Australia. Eight participants (three infectious disease physicians, four gastroenterologists, one hepatologist), selected through purposive sampling, were interviewed about general patient support, their role in support provision, the role of non-medical support and their reasons for not using support services. Results: Main themes included a focus on support as information provision and a view that patient education is best carried out by a medical specialist. The use of support services was regarded as the patient's decision. Participants identified four key periods when patients would benefit from support: during diagnosis, on failure to meet treatment criteria, during interferon treatment and following treatment failure. Conclusions: It was concluded that while barriers exist to the establishment of partnerships between specialists and other support services, this study has identified clear points at which future partnerships could be established. Implications: A partnership approach to developing support for patients with hepatitis C offers a systematic framework to facilitate the participation of health professionals and the community in an important area of public health.
Abstract:
The health impacts of exposure to ambient temperature have been drawing increasing attention from the environmental health research community, government, society, industries, and the public. Case-crossover and time series models are most commonly used to examine the effects of ambient temperature on mortality. However, some key methodological issues remain to be addressed. For example, few studies have used spatiotemporal models to assess the effects of spatial temperatures on mortality. Few studies have used a case-crossover design to examine the delayed (distributed lag) and non-linear relationship between temperature and mortality. Also, little evidence is available on the effects of temperature changes on mortality, or on differences in heat-related mortality over time. This thesis aimed to address the following research questions: 1. How can the case-crossover design and distributed lag non-linear models be combined? 2. Is there any significant difference in effect estimates between time series and spatiotemporal models? 3. How can the effects on mortality of temperature changes between neighbouring days be assessed? 4. Is there any change in temperature effects on mortality over time? To combine the case-crossover design and the distributed lag non-linear model, datasets including deaths, weather conditions (minimum temperature, mean temperature, maximum temperature, and relative humidity), and air pollution were acquired for Tianjin, China, for the years 2005 to 2007. I demonstrated how to combine the case-crossover design with a distributed lag non-linear model. This allows the case-crossover design to estimate the non-linear and delayed effects of temperature whilst controlling for seasonality. There was a consistent U-shaped relationship between temperature and mortality. Cold effects were delayed by 3 days and persisted for 10 days.
Hot effects were acute, lasted for three days, and were followed by mortality displacement for non-accidental, cardiopulmonary, and cardiovascular deaths. Mean temperature was a better predictor of mortality (based on model fit) than maximum or minimum temperature. It is still unclear whether spatiotemporal models using spatial temperature exposures produce better estimates of mortality risk than time series models that use a single site's temperature or temperature averaged from a network of sites. Daily mortality data were obtained for 163 locations across Brisbane city, Australia from 2000 to 2004. Ordinary kriging was used to interpolate spatial temperatures across the city based on 19 monitoring sites. A spatiotemporal model was used to examine the impact of spatial temperature on mortality. A time series model was used to assess the effects on mortality of a single site's temperature, and of temperature averaged from 3 monitoring sites. Squared Pearson scaled residuals were used to check the model fit. The results of this study show that although spatiotemporal models gave a better model fit than time series models, the two approaches gave similar effect estimates. Time series analyses using temperature recorded at a single monitoring site, or the average temperature of multiple sites, were as good at estimating the association between temperature and mortality as a spatiotemporal model. A time series Poisson regression model was used to estimate the association between temperature change and mortality in summer in Brisbane, Australia during 1996–2004 and Los Angeles, United States during 1987–2000. Temperature change was calculated as the current day's mean temperature minus the previous day's mean.
In Brisbane, a drop of more than 3 °C in temperature between days was associated with relative risks (RRs) of 1.16 (95% confidence interval (CI): 1.02, 1.31) for non-external mortality (NEM), 1.19 (95% CI: 1.00, 1.41) for NEM in females, and 1.44 (95% CI: 1.10, 1.89) for NEM in those aged 65-74 years. An increase of more than 3 °C was associated with RRs of 1.35 (95% CI: 1.03, 1.77) for cardiovascular mortality and 1.67 (95% CI: 1.15, 2.43) for people aged < 65 years. In Los Angeles, only a drop of more than 3 °C was significantly associated with RRs of 1.13 (95% CI: 1.05, 1.22) for total NEM, 1.25 (95% CI: 1.13, 1.39) for cardiovascular mortality, and 1.25 (95% CI: 1.14, 1.39) for people aged ≥ 75 years. In both cities, there were joint effects of temperature change and mean temperature on NEM. A change in temperature of more than 3 °C, whether positive or negative, has an adverse impact on mortality even after controlling for mean temperature. I examined the variation in the effects of high temperatures on elderly mortality (age ≥ 75 years) by year, city and region for 83 large US cities between 1987 and 2000. High temperature days were defined as two or more consecutive days with temperatures above the 90th percentile for each city during each warm season (May 1 to September 30). The mortality risk for high temperatures was decomposed into a "main effect" due to high temperatures, estimated using a distributed lag non-linear function, and an "added effect" due to consecutive high temperature days. I pooled yearly effects across regions, and overall effects at both regional and national levels. The effects of high temperature (both main and added effects) on elderly mortality varied greatly by year, city and region. Years with higher heat-related mortality were often followed by years with relatively lower mortality. Understanding this variability in the effects of high temperatures is important for the development of heat-warning systems.
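The high-temperature-day definition above (two or more consecutive days above a city's 90th-percentile temperature) amounts to a run-length check over a daily series. A minimal sketch, assuming the 90th-percentile threshold has already been computed per city and warm season; the function name and list-based representation are illustrative, not from the thesis.

```python
def high_temperature_days(temps, pct90):
    """Flag days that belong to a run of two or more consecutive days
    with temperature above the 90th-percentile threshold pct90.
    Isolated single hot days are not flagged."""
    above = [t > pct90 for t in temps]
    flagged = [False] * len(temps)
    i = 0
    while i < len(temps):
        if above[i]:
            # Find the end of this run of hot days.
            j = i
            while j < len(temps) and above[j]:
                j += 1
            # Flag the run only if it spans at least two days.
            if j - i >= 2:
                for k in range(i, j):
                    flagged[k] = True
            i = j
        else:
            i += 1
    return flagged
```

With a threshold of 34 °C, the series `[30, 36, 37, 31, 38, 30, 35, 36, 37]` yields two flagged runs (days 1-2 and days 6-8), while the isolated hot day 4 is left unflagged.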
In conclusion, this thesis makes contributions in several respects. The case-crossover design was combined with a distributed lag non-linear model to assess the effects of temperature on mortality in Tianjin, allowing the case-crossover design to flexibly estimate the non-linear and delayed effects of temperature. Both extreme cold and high temperatures increased the risk of mortality in Tianjin. Time series models using a single site's temperature, or temperature averaged across several sites, can be used to examine the effects of temperature on mortality. Temperature change, whether a marked drop or a marked increase, increases the risk of mortality. The effect of high temperature on mortality is highly variable from year to year.
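The case-crossover/distributed-lag combination highlighted in this conclusion controls for seasonality through the choice of referent days. A common implementation is time-stratified referent selection, in which each death day is compared only with the other days of the same year and month that fall on the same weekday; the abstract does not state the exact scheme used, so this sketch (including the function name) is an illustrative assumption.

```python
from datetime import date, timedelta

def referent_days(case_day: date) -> list[date]:
    """Time-stratified case-crossover referent selection: control days are
    all days in the same year and month as the case day that share its
    day of week, excluding the case day itself."""
    controls = []
    d = date(case_day.year, case_day.month, 1)
    while d.month == case_day.month:
        if d.weekday() == case_day.weekday() and d != case_day:
            controls.append(d)
        d += timedelta(days=1)
        if d.year != case_day.year:  # December wrap-around guard
            break
    return controls
```

For a death on Wednesday 19 July 2006, the control days would be the other Wednesdays of July 2006: the 5th, 12th and 26th.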
Resumo:
Purpose – The purpose of this paper is to investigate information communications technologies (ICT)-mediated inclusion and exclusion in terms of sexuality through a study of a commercial social networking web site for gay men. Design/methodology/approach – The paper uses an approach based on technological inscription and the commodification of difference to study Gaydar, a commercial social networking site. Findings – Through the activities, events and interactions offered by Gaydar, the study identifies a series of contrasting identity constructions and market segmentations that are constructed through the cyclic commodification of difference. These are fuelled by a particular series of meanings attached to gay male sexualities which serve to keep gay men positioned as a niche market. Research limitations/implications – The research centres on the study of one, albeit widely used, web site with a very specific set of purposes. The study offers a model for future research on sexuality and ICTs. Originality/value – This study places sexuality centre stage in an ICT-mediated environment and provides insights into the contemporary phenomenon of social networking. As a sexualised object, Gaydar presents a semiosis of politicised messages that question heteronormativity while simultaneously contributing to the definition of an increasingly globalised, commercialised and monolithic form of gay male sexuality defined against ICT