Abstract:
This paper presents the results of a study of information behaviors in the context of people's everyday lives, undertaken in order to develop an integrated model of information behavior (IB). 34 participants across 6 countries maintained a daily information journal or diary – mainly through a secure web log – for two weeks each, yielding an aggregate of 468 participant days over five months. The text-rich diary data were analyzed using a multi-method qualitative-quantitative approach in the following order: Grounded Theory analysis with manual coding, automated concept analysis using thesaurus-based visualization, and finally a statistical analysis of the coding data. The findings indicate that people engage in several information behaviors simultaneously throughout their everyday lives (including home and work life) and that sense-making is entangled in all of them. Participants engaged in many of the information behaviors in a parallel, distributed, and concurrent fashion: many information behaviors for one information problem, one information behavior across many information problems, and many information behaviors concurrently across many information problems. The findings also indicate that information avoidance – both active and passive – is a common phenomenon, and that information organizing behaviors, or the lack thereof, caused the most problems for participants. An integrated model of information behaviors is presented based on the findings.
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages, and data summarisation and data abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be most consistently effective, although Consistency-derived subsets tended to give slightly increased accuracy but markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection results in a reduction in MR to between 9.8 and 10.16, with the time-segmented summary data (dataset F) MR being 9.8 and the raw time-series summary data (dataset A) being 9.92. However, for all datasets based on time-series data only, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets consist of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by adding risk factor variables to time-series-based models is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared to the use of risk factors alone is consistent with recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used together as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values being outside the accepted normal range, is associated with some improvement in model performance.
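The evaluation protocol described here (majority-class under-sampling, then Kappa, misclassification rate and AUC) is straightforward to reproduce. Below is a minimal Python sketch of that protocol using scikit-learn in place of the Weka tools (J48, Cfs, SMO) named in the abstract; the data and features are invented placeholders, not the study's anaesthesia records.

```python
# Sketch: majority-class under-sampling + Kappa / MR / AUC evaluation of a
# decision-tree classifier, mirroring the evaluation protocol in the abstract.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import cohen_kappa_score, roc_auc_score, accuracy_score

def undersample_majority(X, y, seed=0):
    """Randomly drop majority-class rows until both classes are equal in size."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    keep = np.flatnonzero(y == minority)
    majority_idx = np.flatnonzero(y != minority)
    keep = np.concatenate([keep, rng.choice(majority_idx, size=counts.min(), replace=False)])
    return X[keep], y[keep]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))            # stand-in feature matrix (time-series summaries + RF)
y = (rng.random(500) < 0.15).astype(int)  # unbalanced CVD labels

Xb, yb = undersample_majority(X, y)
pred = cross_val_predict(DecisionTreeClassifier(max_depth=4), Xb, yb, cv=10)

print("MR   :", round(100 * (1 - accuracy_score(yb, pred)), 2))  # misclassification rate, %
print("Kappa:", round(cohen_kappa_score(yb, pred), 3))
print("AUC  :", round(roc_auc_score(yb, pred), 3))
```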
Abstract:
In Australia and many other countries worldwide, water used in the manufacture of concrete must be potable. It is currently thought that concrete properties are highly influenced by the water type used and its proportion in the concrete mix, but in fact there is little knowledge of the effects of different, alternative water sources used in concrete mix design. Therefore, the identification of the level and nature of contamination in available water sources, and their subsequent influence on concrete properties, is becoming increasingly important. Of most interest is the recycled washout water currently used by batch plants as mixing water for concrete. Recycled washout water is the water used onsite for a variety of purposes, including washing of truck agitator bowls, wetting down of aggregate, and run-off. This report presents current information on the quality of concrete mixing water in terms of mandatory limits and guidelines on impurities, and investigates the impact of recycled washout water on concrete performance. It also explores new sources of recycled water in terms of their quality and suitability for use in concrete production. The complete recycling of washout water has been considered for use in concrete mixing plants because of the great benefit in terms of reducing the cost of waste disposal and conserving the environment. The objective of this study was to investigate the effects of using washout water on the properties of fresh and hardened concrete. This was carried out through a 10-week sampling program at three representative sites across South East Queensland. The sample sites chosen represented a cross-section of plant recycling methods, from most effective to least effective. The washout water samples collected from each site were then analysed in accordance with Standards Association of Australia AS/NZS 5667.1:1998. These tests revealed that, compared with tap water, the washout water was higher in alkalinity, pH, and total dissolved solids content. However, washout water with a total dissolved solids content of less than 6% could be used in the production of concrete with acceptable strength and durability. These results were then interpreted using the chemometric techniques Principal Component Analysis and SIMCA, and the Multi-Criteria Decision Making methods PROMETHEE and GAIA were used to rank the samples from cleanest to least clean. It was found that even the simplest purifying processes provided water suitable for the manufacture of concrete from washout water. These results were compared with a series of alternative water sources. The water sources included treated effluent, sea water and dam water, and were subject to the same testing parameters as the reference set. Analysis of these results found that, despite having higher levels of both organic and inorganic constituents, the waters complied with the parameter thresholds given in the American Standard Test Method (ASTM) C913-08. All of the alternative sources were found to be suitable sources of water for the manufacture of plain concrete.
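As a rough illustration of the chemometric step described above, the sketch below runs a standardised Principal Component Analysis over a handful of invented water-quality measurements; the PROMETHEE/GAIA ranking is not shown, and all sample names and values are placeholders rather than the Queensland data.

```python
# Sketch: PCA over water-quality parameters as a first exploratory step
# toward ranking samples from cleanest to least clean. Values are invented.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# rows = water samples, columns = pH, alkalinity, TDS, chloride, sulfate
X = np.array([
    [7.1,    60,   350,    40,   30],   # tap water (reference)
    [11.8,  900,  4800,   300,  600],   # washout water, least-effective recycling
    [10.9,  400,  2100,   150,  250],   # washout water, most-effective recycling
    [7.8,   120, 35000, 19000, 2700],   # sea water
    [7.4,    90,   500,    60,   45],   # dam water
])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for name, (pc1, pc2) in zip(["tap", "washout A", "washout B", "sea", "dam"], scores):
    print(f"{name:10s} PC1={pc1:6.2f} PC2={pc2:6.2f}")
```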
Abstract:
The soil C saturation concept suggests a limit to whole-soil organic carbon (SOC) accumulation determined by inherent physicochemical characteristics of four soil C pools: unprotected, physically protected, chemically protected, and biochemically protected. Previous attempts to quantify soil C sequestration capacity have focused primarily on silt and clay protection and largely ignored the effects of soil structural protection and biochemical protection. We assessed two contrasting models of SOC accumulation, one with no saturation limit (i.e., a linear first-order model) and one with an explicit soil C saturation limit (i.e., a C saturation model). We isolated soil fractions corresponding to the C pools (i.e., free particulate organic matter [POM], microaggregate-associated C, silt- and clay-associated C, and non-hydrolyzable C) from eight long-term agroecosystem experiments across the United States and Canada. Due to the composite nature of the physically protected C pool, we fractionated it into mineral- vs. POM-associated C. Within each site, the number of fractions fitting the C saturation model was directly related to maximum SOC content, suggesting that a broad range in SOC content is necessary to evaluate fraction C saturation. The two sites with the greatest SOC range showed C saturation behavior in the chemically protected, biochemically protected, and some mineral-associated fractions of the physically protected pool. The unprotected pool and the aggregate-protected POM showed linear, non-saturating behavior. Evidence of C saturation of chemically and biochemically protected SOC pools was observed at sites far from their theoretical C saturation level, while saturation of aggregate-protected fractions occurred in soils closer to their C saturation level.
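The contrast between the two competing models can be made concrete with a small fitting exercise. The sketch below, in Python with SciPy, fits a linear first-order model and one common asymptotic saturation form to synthetic data; the saturation equation shown is a generic choice for illustration, not necessarily the exact form used by the authors.

```python
# Sketch: linear (no-saturation) vs. asymptotic C-saturation model fits.
import numpy as np
from scipy.optimize import curve_fit

def linear_model(c_input, a, b):
    return a + b * c_input                  # fraction C grows without limit

def saturation_model(c_input, c_max, k):
    return c_max * c_input / (k + c_input)  # fraction C approaches c_max

c_input = np.linspace(1, 50, 20)            # e.g. long-term C input level (synthetic)
rng = np.random.default_rng(0)
frac_c = 30 * c_input / (12 + c_input) + rng.normal(0, 1, c_input.size)

for name, f, p0 in [("linear", linear_model, (1, 1)),
                    ("saturation", saturation_model, (25, 10))]:
    popt, _ = curve_fit(f, c_input, frac_c, p0=p0)
    rss = np.sum((frac_c - f(c_input, *popt)) ** 2)   # residual sum of squares
    print(name, "params:", np.round(popt, 2), "RSS:", round(rss, 1))
```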
Abstract:
Since land use change can have significant impacts on regional biogeochemistry, we investigated how conversion of forest and cultivation to pasture impacts soil C and N cycling. In addition to examining total soil C, we isolated soil physicochemical C fractions in order to understand the mechanisms by which soil C is sequestered or lost. Total soil C did not change significantly over time following conversion from forest, though coarse (250–2,000 μm) particulate organic matter C increased by a factor of 6 immediately after conversion. Aggregate mean weight diameter was reduced by about 50% after conversion, but values were similar to those under forest after 8 years under pasture. Samples collected from a long-term pasture that was converted from annual cultivation more than 50 years ago revealed that some soil physical properties negatively impacted by cultivation were very slow to recover. Finally, our results indicate that soil macroaggregates turn over more rapidly under pasture than under forest and are less efficient at stabilizing soil C, whereas microaggregates from pasture soils stabilize a larger concentration of C than forest microaggregates. Since conversion from forest to pasture has a minimal impact on total soil C content in the Piedmont region of Virginia, United States, a simple C stock accounting system could use the same base soil C stock value for either type of land use. However, since the effects of forest-to-pasture conversion are a function of grassland management following conversion, assessments of C sequestration rates require activity data on the extent of various grassland management practices.
Abstract:
The Mobile Emissions Assessment System for Urban and Regional Evaluation (MEASURE) model provides an external validation capability for the hot stabilized option; the model is one of several new modal emissions models designed to predict hot stabilized emission rates for various motor vehicle groups as a function of the conditions under which the vehicles are operating. The validation of aggregate measurements, such as speed and acceleration profiles, is performed on an independent data set using three statistical criteria. The MEASURE algorithms have been shown to provide significant improvements in both average emission estimates and explanatory power over some earlier models for pollutants across almost every operating cycle tested.
Abstract:
Many studies focused on the development of crash prediction models have resulted in aggregate crash prediction models that quantify the safety effects of geometric, traffic, and environmental factors on the expected number of total, fatal, injury, and/or property damage crashes at specific locations. Crash prediction models focused on predicting different crash types, however, have rarely been developed. Crash type models are useful for at least three reasons. The first is motivated by the need to identify sites that are high risk with respect to specific crash types but that may not be revealed through crash totals. Second, countermeasures are likely to affect only a subset of all crashes—usually called target crashes—and so examination of crash types will lead to improved ability to identify effective countermeasures. Finally, there is a priori reason to believe that different crash types (e.g., rear-end, angle, etc.) are associated with road geometry, the environment, and traffic variables in different ways and as a result justify the estimation of individual predictive models. The objectives of this paper are to (1) demonstrate that different crash types are associated with predictor variables in different ways (as theorized) and (2) show that estimation of crash type models may lead to greater insights regarding crash occurrence and countermeasure effectiveness. This paper first describes the estimation results of crash prediction models for angle, head-on, rear-end, sideswipe (same direction and opposite direction), and pedestrian-involved crash types. Serving as a basis for comparison, a crash prediction model is also estimated for total crashes. Based on 837 motor vehicle crashes collected at two-lane rural intersections in the state of Georgia, six prediction models are estimated, resulting in two Poisson (P) models and four negative binomial (NB) models. The analysis reveals that factors such as annual average daily traffic, the presence of turning lanes, and the number of driveways have a positive association with each type of crash, whereas median widths and the presence of lighting are negatively associated. For the best-fitting models, covariates are related to crash types in different ways, suggesting that crash types are associated with different precrash conditions and that modeling total crash frequency may not be helpful for identifying specific countermeasures.
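A minimal sketch of this kind of crash-type estimation, assuming a statsmodels workflow: a Poisson and a negative binomial model are fitted to one hypothetical crash type using predictors of the kind discussed above. The data frame below is a simulated stand-in, not the Georgia intersection records.

```python
# Sketch: Poisson vs. negative binomial crash-frequency models for one crash type.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "aadt": rng.uniform(500, 20000, n),      # annual average daily traffic
    "turn_lane": rng.integers(0, 2, n),      # turning lane present (0/1)
    "driveways": rng.integers(0, 8, n),      # number of driveways
    "median_width": rng.uniform(0, 20, n),   # metres
    "lighting": rng.integers(0, 2, n),       # lighting present (0/1)
})
mu = np.exp(-6 + 0.0002 * df.aadt + 0.3 * df.turn_lane + 0.1 * df.driveways
            - 0.02 * df.median_width - 0.4 * df.lighting)
df["rear_end"] = rng.poisson(mu)             # simulated rear-end crash counts

formula = "rear_end ~ aadt + turn_lane + driveways + median_width + lighting"
poisson_fit = smf.poisson(formula, df).fit(disp=0)
negbin_fit = smf.negativebinomial(formula, df).fit(disp=0)
print(poisson_fit.params.round(4))
print("AIC: Poisson", round(poisson_fit.aic, 1), "NB", round(negbin_fit.aic, 1))
```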
Abstract:
The structure-building phenomena within clay aggregates are governed by forces acting between clay particles. The nature of such forces is important to understand in order to manipulate the aggregate structure for applications such as settling and dewatering. A parallel particle orientation is required when measuring the forces acting between the basal planes of clay mineral platelets using atomic force microscopy (AFM). In order to prepare a film of clay particles with the optimal orientation for AFM measurements, the influences of particle concentration in suspension, suspension pH, and particle size on clay platelet orientation were investigated using scanning electron microscopy (SEM) and X-ray diffraction (XRD). From these investigations, we conclude that high clay (dry mass) concentrations and larger particle diameters (up to 5 µm) in suspension result in random orientation of platelets on the substrate. The best laminar orientation in the dried clay film, represented by an XRD 001/020 intensity ratio of more than 150 and by SEM assessments, was obtained by drying thin layers of 0.2 wt% −5 µm clay suspensions at pH 10.5. These dried films are stable and suitable for close-approach AFM studies in solution.
Abstract:
The aim of this work was to quantify exposure to particles emitted by wood-fired ovens in pizzerias. Overall, 15 microenvironments were chosen and analyzed in a 14-month experimental campaign. Particle number concentration and distribution were measured simultaneously using a Condensation Particle Counter (CPC), a Scanning Mobility Particle Sizer (SMPS), and an Aerodynamic Particle Sizer (APS). The surface area and mass distributions and concentrations, as well as estimates of lung-deposited surface area and PM1, were evaluated using the SMPS-APS system with dosimetric models, taking into account the presence of aggregates on the basis of the Idealized Aggregate (IA) theory. The fraction of inhaled particles deposited in the respiratory system and different fractions of particulate matter were also measured by means of a Nanoparticle Surface Area Monitor (NSAM) and a photometer (DustTrak DRX), respectively. In this way, supplementary data were obtained during the monitoring of trends inside the pizzerias. We found that surface area and PM1 particle concentrations in pizzerias can be very high, especially when compared to other critical microenvironments, such as transport hubs. During pizza cooking under normal ventilation conditions, concentrations were found to be up to 74, 70 and 23 times higher than background levels for number, surface area and PM1, respectively. A key parameter is the oven shape factor, defined as the ratio between the size of the face opening in respect
Abstract:
Prostate cancer is the second most common cause of cancer-related death in Western males. Current diagnostic, prognostic and treatment approaches are not ideal, and advanced metastatic prostate cancer is incurable. There is an urgent need for improved adjunctive therapies and markers for this disease. G protein-coupled receptors (GPCRs) are likely to play a significant role in the initiation and progression of prostate cancer. Over the last decade, it has emerged that GPCRs are likely to function as homodimers and heterodimers. Heterodimerisation between GPCRs can result in the formation of novel pharmacological receptors with altered functional outcomes, and a number of GPCR heterodimers have been implicated in the pathogenesis of human disease. Importantly, novel GPCR heterodimers represent potential new targets for the development of more specific therapeutic drugs. Ghrelin is a 28 amino acid peptide hormone which has a unique n-octanoic acid post-translational modification. Ghrelin has a number of important physiological roles, including roles in appetite regulation and the stimulation of growth hormone release. The ghrelin receptor is the growth hormone secretagogue receptor type 1a, GHS-R1a, a seven transmembrane domain GPCR, and GHS-R1b is a C-terminally truncated isoform of the ghrelin receptor, consisting of five transmembrane domains. Growing evidence suggests that ghrelin and the ghrelin receptor isoforms, GHS-R1a and GHS-R1b, may have a role in the progression of a number of cancers, including prostate cancer. Previous studies by our research group have shown that the truncated ghrelin receptor isoform, GHS-R1b, is not expressed in the normal prostate; however, it is expressed in prostate cancer. The altered expression of this truncated isoform may reflect a difference between the normal and cancerous states. A number of mutant GPCRs have been shown to regulate the function of their corresponding wild-type receptors. We therefore investigated the potential role of interactions between GHS-R1a and GHS-R1b, which are co-expressed in prostate cancer, and aimed to investigate the function of this potentially new pharmacological receptor. In 2005, obestatin, a 23 amino acid C-terminally amidated peptide derived from preproghrelin, was identified and described as opposing the stimulating effects of ghrelin on appetite and food intake. GPR39, an orphan GPCR closely related to the ghrelin receptor, was identified as the endogenous receptor for obestatin. Recently, however, the ability of obestatin to oppose the effects of ghrelin on appetite and food intake has been questioned, and furthermore, it appears that GPR39 may in fact not be the obestatin receptor. The role of GPR39 in the prostate is of interest, however, as it is a zinc receptor. Zinc has a unique role in the biology of the prostate, where it is normally accumulated at high levels, and zinc accumulation is altered in the development of prostate malignancy. Ghrelin and zinc have important roles in prostate cancer, and dimerisation of their receptors may have novel roles in malignant prostate cells. The aim of the current study, therefore, was to demonstrate the formation of GHS-R1a/GHS-R1b and GHS-R1a/GPR39 heterodimers and to investigate potential functions of these heterodimers in prostate cancer cell lines. To demonstrate dimerisation we first employed a classical co-immunoprecipitation technique.
Using cells co-overexpressing FLAG- and Myc-tagged GHS-R1a, GHS-R1b and GPR39, we were able to co-immunoprecipitate these receptors. Significantly, however, the receptors formed high molecular weight aggregates. A number of questions have been raised over the propensity of GPCRs to aggregate during co-immunoprecipitation as a result of their hydrophobic nature, and such aggregation may be misinterpreted as receptor dimerisation. As we observed significant receptor aggregation in this study, we used additional methods to confirm the specificity of these putative GPCR interactions. We used two different resonance energy transfer (RET) methods, bioluminescence resonance energy transfer (BRET) and fluorescence resonance energy transfer (FRET), to investigate interactions between the ghrelin receptor isoforms and GPR39. RET is the transfer of energy from a donor fluorophore to an acceptor fluorophore when they are in close proximity, and RET methods are, therefore, applicable to the observation of specific protein-protein interactions. Extensive studies using the second generation bioluminescence resonance energy transfer (BRET2) technology were performed; however, a number of technical limitations were observed. The substrate used during BRET2 studies, coelenterazine 400a, has a low quantum yield and rapid signal decay. This study highlighted the requirement for the expression of donor- and acceptor-tagged receptors at high levels so that a BRET ratio can be determined. After performing a number of BRET2 experimental controls, our BRET2 data did not fit the predicted results for a specific interaction between these receptors. The interactions that we observed may in fact represent ‘bystander BRET’ resulting from high levels of expression forcing the donor and acceptor into close proximity. Our FRET studies employed two different FRET techniques, acceptor photobleaching FRET and sensitised emission FRET measured by flow cytometry. We were unable to observe any significant FRET, or FRET values that were likely to result from specific receptor dimerisation, between GHS-R1a, GHS-R1b and GPR39. While we were unable to conclusively demonstrate direct dimerisation between GHS-R1a, GHS-R1b and GPR39 using several methods, our findings do not exclude the possibility that these receptors interact. We aimed to investigate whether co-expression of combinations of these receptors had functional effects in prostate cancer cells. It has previously been demonstrated that ghrelin stimulates cell proliferation in prostate cancer cell lines through ERK1/2 activation, and that GPR39 can stimulate ERK1/2 signalling in response to zinc treatments. Additionally, both GHS-R1a and GPR39 display a high level of constitutive signalling, and these constitutively active receptors can attenuate apoptosis when overexpressed individually in some cell types. We therefore investigated ERK1/2 and AKT signalling and cell survival in prostate cancer cells, and the potential modulation of these functions by dimerisation between GHS-R1a, GHS-R1b and GPR39. Expression of these receptors in the PC-3 prostate cancer cell line, either alone or in combination, did not alter constitutive ERK1/2 or AKT signalling, basal apoptosis or tunicamycin-stimulated apoptosis, compared to controls. In summary, the potential interactions between the ghrelin receptor isoforms, GHS-R1a and GHS-R1b, and the related zinc receptor, GPR39, and the potential for functional outcomes in prostate cancer were investigated using a number of independent methods.
We did not definitively demonstrate the formation of these dimers using a number of state-of-the-art methods designed to directly demonstrate receptor-receptor interactions. We investigated a number of potential functions of GPR39 and GHS-R1a in the prostate and did not observe altered function in response to co-expression of these receptors. The technical questions raised by this study highlight the requirement for extensive controls when using current methods for the demonstration of GPCR dimerisation. Similar findings in this field reflect the current controversy surrounding the investigation of GPCR dimerisation. Although GHS-R1a/GHS-R1b or GHS-R1a/GPR39 heterodimerisation was not clearly demonstrated, this study provides a basis for future investigations of these receptors in prostate cancer. Additionally, the results presented in this study, and growing evidence in the literature, highlight the requirement for an extensive understanding of the experimental method and the performance of a range of controls to avoid the spurious interpretation of data gained from artificial expression systems. The future development of more robust techniques for investigating GPCR dimerisation is clearly required and will enable us to elucidate whether GHS-R1a, GHS-R1b and GPR39 form physiologically relevant dimers.
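For readers unfamiliar with the BRET readout discussed above, the sketch below shows the basic arithmetic of a background-corrected BRET ratio: acceptor-channel counts over donor-channel counts, minus the same ratio from donor-only control wells. All channel counts are invented, and this is a generic illustration rather than the exact correction protocol used in the study.

```python
# Sketch: background-corrected BRET ratio arithmetic with invented counts.
import numpy as np

def bret_ratio(acceptor, donor, donor_only_acceptor, donor_only_donor):
    """Acceptor/donor ratio for a well, minus the donor-only control ratio."""
    return acceptor / donor - donor_only_acceptor / donor_only_donor

# donor channel: e.g. Rluc emission; acceptor channel: e.g. GFP2 (BRET2-style)
wells = np.array([[5200, 48000], [5100, 46500], [5350, 47200]])  # [acceptor, donor]
control = (900, 45000)                                           # donor-only well

ratios = [bret_ratio(a, d, *control) for a, d in wells]
print("mean corrected BRET ratio:", round(float(np.mean(ratios)), 4))
```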
Abstract:
Purpose: Businesses cannot rely on their customers to always do the right thing. To help researchers and service providers better understand the dark (and light) side of customer behavior, this study aims to aggregate and investigate perceptions of consumer ethics from young consumers on five continents. The study seeks to present a profile of consumer behavioral norms, how ethical inclinations have evolved over time, and country differences. ---------- Design/methodology/approach: Data were collected from ten countries across five continents between 1997 and 2007. A self-administered questionnaire containing 14 consumer scenarios asked respondents to rate the acceptability of questionable consumer actions. ---------- Findings: Overall, consumers found four of the 14 questionable consumer actions acceptable. Illegal activities were mostly viewed as unethical, while some legal actions that were against company policy were viewed less harshly. Differences across continents emerged, with Europeans being the least critical, while Asian and African respondents were jointly the most critical of consumer actions. Over time, consumers have become less tolerant of questionable behaviors. ---------- Practical implications: Service providers should use the findings of this study to better understand the service customer. Knowing what customers in general believe is ethical or unethical can help service designers focus on the aspects of the technology or design most vulnerable to customer deviance. Multinationals already know they must adapt their business practices to the market in which they are operating, but they must also adapt their expectations as to the behavior of the corresponding consumer base. ---------- Originality/value: This investigation into consumer ethics helps businesses understand what their customer base believes is the right thing in their role as customers. This large-scale study of consumer ethics, including 3,739 respondents on five continents, offers an evolving view of the ethical inclinations of young consumers.
Abstract:
Many cities worldwide face the prospect of major transformation as the world moves towards a global information order. In this new era, urban economies are being radically altered by dynamic processes of economic and spatial restructuring. The result is the creation of ‘informational cities’ or, by their newer and more popular name, ‘knowledge cities’. For the last two centuries, social production was primarily understood and shaped by neo-classical economic thought that recognized only three factors of production: land, labor and capital. Knowledge, education, and intellectual capacity were secondary, if not incidental, factors. Human capital was assumed to be either embedded in labor or just one of numerous categories of capital. In recent decades, it has become apparent that knowledge is sufficiently important to deserve recognition as a fourth factor of production. Knowledge and information, and the social and technological settings for their production and communication, are now seen as keys to development and economic prosperity. The rise of knowledge-based opportunity has, in many cases, been accompanied by a concomitant decline in traditional industrial activity. The replacement of physical commodity production by more abstract forms of production (e.g. information, ideas, and knowledge) has, however paradoxically, reinforced the importance of central places and led to the formation of knowledge cities. Knowledge is produced, marketed and exchanged mainly in cities. Knowledge cities therefore aim to assist decision-makers in making their cities compatible with the knowledge economy and thus able to compete with other cities. Knowledge cities enable their citizens to foster knowledge creation, knowledge exchange and innovation. They also encourage the continuous creation, sharing, evaluation, renewal and updating of knowledge. To compete nationally and internationally, cities need knowledge infrastructure (e.g. universities, research and development institutes); a concentration of well-educated people; technological, mainly electronic, infrastructure; and connections to the global economy (e.g. international companies and finance institutions for trade and investment). Moreover, they must possess the people and things necessary for the production of knowledge and, as importantly, function as breeding grounds for talent and innovation. The economy of a knowledge city creates high value-added products using research, technology, and brainpower. The private and public sectors value knowledge, spend money on its discovery and dissemination and, ultimately, harness it to create goods and services. Although many cities call themselves knowledge cities, currently only a few cities around the world (e.g. Barcelona, Delft, Dublin, Montreal, Munich, and Stockholm) have earned that label. Many other cities aspire to the status of knowledge city through urban development programs that target knowledge-based urban development; examples include Copenhagen, Dubai, Manchester, Melbourne, Monterrey, Singapore, and Shanghai. Knowledge-Based Urban Development: To date, the development of most knowledge cities has proceeded organically as a dependent and derivative effect of global market forces. Urban and regional planning has responded slowly, and sometimes not at all, to the challenges and opportunities of the knowledge city. That is changing, however. Knowledge-based urban development potentially brings both economic prosperity and a sustainable socio-spatial order.
Its goal is to produce and circulate abstract work. The globalization of the world in the last decades of the twentieth century was a dialectical process. On one hand, as the tyranny of distance was eroded, economic networks of production and consumption were constituted at a global scale. At the same time, spatial proximity remained as important as ever, if not more so, for knowledge-based urban development. Mediated by information and communication technology, personal contact, and the medium of tacit knowledge, organizational and institutional interactions are still closely associated with spatial proximity. The clustering of knowledge production is essential for fostering innovation and wealth creation. The social benefits of knowledge-based urban development extend beyond aggregate economic growth. On the one hand is the possibility of a particularly resilient form of urban development, secured in a network of connections anchored at local, national, and global coordinates. On the other hand, quality of place and life, defined by the level of public services (e.g. health and education) and by the conservation and development of the cultural, aesthetic and ecological values that give cities their character and attract or repel the creative class of knowledge workers, is a prerequisite for successful knowledge-based urban development. The goal is a secure economy in a human setting: in short, smart growth or sustainable urban development.
Abstract:
At QUT, research data refers to information that is generated or collected to be used as a primary source in the production of original research results, and which would be required to validate or replicate research findings (Callan, De Vine, & Baker, 2010). Making publicly funded research data discoverable by the broader research community and the public is a key aim of the Australian National Data Service (ANDS). Queensland University of Technology (QUT) has been innovating in this space by undertaking mutually dependent technical and content (metadata) focused projects funded by ANDS. Research Data Librarians identified and described datasets generated from Category 1 funded research at QUT by interviewing researchers, collecting metadata, and fashioning metadata records for upload to the Australian Research Data Commons (ARDC) and exposure through the Research Data Australia (RDA) interface. In parallel to this project, a Research Data Management Service and a Metadata Hub project were undertaken by QUT High Performance Computing & Research Support specialists. These projects will collectively store and aggregate QUT's metadata and research data from multiple repositories and administration systems, and contribute metadata directly via an OAI-PMH compliant feed to RDA. The pioneering nature of the work resulted in a collaborative project dynamic in which good data management practices and the discoverability and sharing of research data were the shared drivers for all activity. Each project's development and progress was dependent on feedback from the other: the metadata structure evolved in tandem with the development of the repository, and the development of the repository interface responded to the needs of the data interview process. The project environment was one of bottom-up collaborative approaches to process and system development, matched by top-down strategic alliances crossing organisational boundaries in order to provide the deliverables required by ANDS. This paper showcases the work undertaken at QUT, focusing on the Seeding the Commons project as a case study, and illustrates how the data management projects are interconnected. It describes the processes and systems being established to make QUT research data more visible, and the nature of the collaborations between organisational areas required to achieve this. The paper concludes with the Seeding the Commons project outcomes and the contribution this project made to getting more research data ‘out there’.
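To illustrate the kind of OAI-PMH feed mentioned above, here is a minimal harvesting sketch in Python. The endpoint URL is a placeholder, while the verb and parameter names (ListRecords, metadataPrefix=oai_dc) are standard OAI-PMH; it requires the requests package.

```python
# Sketch: harvesting record identifiers from an OAI-PMH endpoint.
import requests
import xml.etree.ElementTree as ET

BASE_URL = "https://example.edu/oai"   # placeholder repository endpoint
OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

resp = requests.get(BASE_URL,
                    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
                    timeout=30)
resp.raise_for_status()
root = ET.fromstring(resp.content)

# Print each record's OAI identifier from its <header> element
for header in root.iter("{http://www.openarchives.org/OAI/2.0/}header"):
    print(header.findtext("oai:identifier", namespaces=OAI_NS))
```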
Abstract:
Background: In response to the need for more comprehensive quality assessment within Australian residential aged care facilities, the Clinical Care Indicator (CCI) Tool was developed to collect outcome data as a means of making inferences about quality. A national trial of its effectiveness, and a Brisbane-based trial of its use within the quality improvement context, determined that the CCI Tool represented a potentially valuable addition to the Australian aged care system. This document describes the next phase in the CCI Tool's development, the aims of which were to establish validity and reliability of the CCI Tool and to develop quality indicator thresholds (benchmarks) for use in Australia. The CCI Tool is now known as the ResCareQA (Residential Care Quality Assessment). Methods: The study aims were achieved through a combination of quantitative data analysis and expert panel consultations using a modified Delphi process. The expert panel consisted of experienced aged care clinicians, managers, and academics; they were initially consulted to determine face and content validity of the ResCareQA, and later to develop thresholds of quality. To analyse its psychometric properties, ResCareQA forms were completed for all residents (N=498) of nine aged care facilities throughout Queensland. Kappa statistics were used to assess inter-rater and test-retest reliability, and the Cronbach's alpha coefficient was calculated to determine internal consistency. For concurrent validity, equivalent items on the ResCareQA and the Resident Classification Scales (RCS) were compared using Spearman's rank order correlations, while discriminative validity was assessed using the known-groups technique, comparing ResCareQA results between groups with differing care needs as well as between male and female residents. Rank-ordered facility results for each clinical care indicator (CCI) were circulated to the panel; upper and lower thresholds for each CCI were nominated by panel members and refined through a Delphi process. These thresholds indicate excellent care at one extreme and questionable care at the other. Results: Minor modifications were made to the assessment, and it was renamed the ResCareQA. Agreement on its content was reached after two Delphi rounds; the final version contains 24 questions across four domains, enabling generation of 36 CCIs. Both test-retest and inter-rater reliability were sound, with median kappa values of 0.74 (test-retest) and 0.91 (inter-rater); internal consistency was not as strong, with a Cronbach's alpha of 0.46. Because the ResCareQA does not provide a single combined score, comparisons for concurrent validity were made with the RCS on an item-by-item basis, with most resultant correlations being quite low. Discriminative validity analyses, however, revealed highly significant differences in the total number of CCIs between high care and low care groups (t199=10.77, p=0.000), while the differences between male and female residents were not significant (t414=0.56, p=0.58). Clinical outcomes varied both within and between facilities; agreed upper and lower thresholds were finalised after three Delphi rounds. Conclusions: The ResCareQA provides a comprehensive, easily administered means of monitoring quality in residential aged care facilities that can be reliably used on multiple occasions. The relatively modest internal consistency score was likely due to the multi-factorial nature of quality and the absence of an aggregate result for the assessment.
Measurement of concurrent validity proved difficult in the absence of a gold standard, but the sound discriminative validity results suggest that the ResCareQA has acceptable validity and could be confidently used as an indication of care quality within Australian residential aged care facilities. The thresholds, while preliminary due to the small sample size, enable users to make judgements about quality within and between facilities. It is therefore recommended that the ResCareQA be adopted for wider use.
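The two reliability statistics reported above are easy to reproduce. The sketch below computes Cohen's kappa for a pair of raters and Cronbach's alpha over an item-score matrix; all ratings are invented placeholders, not ResCareQA data.

```python
# Sketch: Cohen's kappa (agreement) and Cronbach's alpha (internal consistency).
import numpy as np
from sklearn.metrics import cohen_kappa_score

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# inter-rater agreement on a binary indicator (e.g. indicator triggered yes/no)
rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print("kappa:", round(cohen_kappa_score(rater_a, rater_b), 2))

# internal consistency over a placeholder 498-resident x 24-question matrix
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(498, 24))
print("alpha:", round(cronbach_alpha(scores), 2))
```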
Abstract:
In many product categories of durable goods, such as TVs, PCs, and DVD players, the largest component of sales is generated by consumers replacing existing units. Aggregate sales models proposed by diffusion-of-innovation researchers for the replacement component of sales have incorporated several different replacement distributions, such as the Rayleigh, Weibull, Truncated Normal and Gamma. Although these alternative replacement distributions have been tested using both time-series sales data and individual-level actuarial “life-tables” of replacement ages, there is no consensus on which distributions are more appropriate for modelling replacement behaviour. In the current study we are motivated to develop a new “modified gamma” distribution for two reasons. First, we recognise that replacements have two fundamentally different drivers – those forced by failure, and early, discretionary replacements – and the replacement distribution for each of these drivers is expected to be quite different. Second, we observed a poor fit of other distributions to our empirical data. We conducted a survey of 8,077 households to empirically examine models of replacement sales for six electronic consumer durables – TVs, VCRs, DVD players, digital cameras, and personal and notebook computers. These data allow us to construct individual-level “life-tables” for replacement ages. We demonstrate that the new modified gamma model fits the empirical data better than existing models for all six products, using both a primary and a hold-out sample.
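Since the exact form of the proposed “modified gamma” is not given in the abstract, the sketch below illustrates only the underlying idea: a single gamma fit to replacement ages versus a crude two-component fit separating early discretionary replacements from failure-forced ones, compared by log-likelihood. The data and the age split threshold are invented.

```python
# Sketch: single gamma vs. crude two-component gamma fit to replacement ages.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic replacement ages (years): 30% early/discretionary, 70% failure-driven
ages = np.concatenate([rng.gamma(2.0, 1.2, 300),    # discretionary, young units
                       rng.gamma(9.0, 0.9, 700)])   # failure-forced, older units

# single gamma fit (location fixed at 0)
shape, loc, scale = stats.gamma.fit(ages, floc=0)
loglik_gamma = stats.gamma.logpdf(ages, shape, loc, scale).sum()

# crude two-component fit: split at an age threshold, fit each part separately
early, late = ages[ages < 4], ages[ages >= 4]
w = early.size / ages.size                          # mixture weight
p_early = stats.gamma.fit(early, floc=0)
p_late = stats.gamma.fit(late, floc=0)
mix_pdf = (w * stats.gamma.pdf(ages, *p_early)
           + (1 - w) * stats.gamma.pdf(ages, *p_late))

print("single gamma log-lik :", round(loglik_gamma, 1))
print("two-component log-lik:", round(np.log(mix_pdf).sum(), 1))
```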