122 results for Open Data, Dati Aperti, Open Government Data


Relevance: 60.00%

Abstract:

The Australian Securities Exchange (ASX) listing rule 3.1 requires listed companies to immediately disclose price-sensitive information to the market via the ASX’s Company Announcements Platform (CAP) prior to release through other disclosure channels. Since 1999, to improve the communication process, the ASX has permitted third-party mediation in the disclosure process that leads to the release of an Open Briefing (OB) through CAP. An OB is an interview between senior executives of the firm and an Open Briefing analyst employed by Orient Capital Pty Ltd, broaching topics such as current profit and outlook. Motivated by the absence of research on factors that influence firms to use OBs as a discretionary disclosure channel, this study examines: (1) Why do firms choose to release information to the market via OBs? (2) What firm characteristics explain the discretionary use of OBs as a disclosure channel? (3) What disclosure attributes influence firms’ decisions to regularly use OBs as a disclosure channel? Based on agency and information economics theories, a theoretical framework is developed to address the research questions. This framework comprises the disclosure environment (firm characteristics and external factors), disclosure attributes, and disclosure consequences. To address the first research question, the study investigates (i) the purpose of using OBs, (ii) whether firms use OBs to provide information relating to previous public announcements, and (iii) whether firms use OBs to provide routine or non-routine disclosures. In relation to the second and third research questions, hypotheses are developed to test factors expected to explain the discretionary use of OBs and firms’ decisions to regularly use OBs, and to explore the factors influencing the nature of OB disclosure. Content analysis and logistic regression models are used to investigate the research questions and test the hypotheses.
Data are drawn from a hand-collected population of 1863 OB announcements issued by 239 listed firms between 2000 and 2010. The results show that the information disclosed via OB announcements principally concerns corporate strategies, performance and outlook. Most OB announcements are linked with a previous related announcement, with the lag between announcements significantly longer for loss-making firms than for profit-making firms. The main results show that firms which are larger, have an analyst following, and have higher growth opportunities are more likely to release OBs. Further, older firms and firms that release OB announcements containing good news, historical information and less complex information tend to be regular OB users. Lastly, firms more likely to disclose strategic information via OBs tend to operate in industries facing greater uncertainty, lack an analyst following, and have higher growth opportunities; such firms are less likely to disclose good news, historical information and complex information via OBs. This study is expected to contribute to the disclosure literature on the disclosure attributes and firm characteristics that influence behaviour in this unique (OB) disclosure channel. With regard to practical significance, regulators can gain an understanding of how OBs are disclosed, which can assist them in monitoring the use of OBs and improving the effectiveness of communications with stakeholders. In addition, investors can better comprehend the information contained in OB announcements, which may in turn better inform their investment decisions.

Relevance: 60.00%

Abstract:

Health complaint statistics are important for identifying problems and bringing about improvements to the health care provided by health service providers and to the wider health care system. This paper provides an overview of complaint handling by the eight Australian state and territory health complaint entities, based on an analysis of data from their annual reports. The analysis shows considerable variation between jurisdictions in the ways complaint data are defined, collected and recorded. Complaints from the public are an important accountability mechanism and open a window on service quality. The lack of a national approach leads to fragmentation of complaint data and a lost opportunity to use national data to assist policy development and to identify the main areas causing consumers to complain. A national approach to complaint data collection is needed in order to better respond to patients’ concerns.

Relevance: 60.00%

Abstract:

Aims: To identify risk factors for major adverse events (AEs) and to develop a nomogram to predict the probability of such AEs in individual patients who have surgery for apparent early stage endometrial cancer. Methods: We used data from 753 patients who were randomized to either total laparoscopic hysterectomy or total abdominal hysterectomy in the LACE trial. Serious adverse events that prolonged hospital stay, or postoperative adverse events of grade 3 or higher on the Common Terminology Criteria for Adverse Events (CTCAE v3), were considered major AEs. We analyzed pre-surgical characteristics associated with the risk of developing major AEs by multivariate logistic regression, and identified a parsimonious model by backward stepwise logistic regression. The six most significant or clinically important variables were included in the nomogram to predict the risk of major AEs within 6 weeks of surgery, and the nomogram was internally validated. Results: Overall, 132 (17.5%) patients had at least one major AE. An open surgical approach (laparotomy), a higher Charlson comorbidity score, moderately differentiated tumours on curettings, a higher baseline ECOG score, a higher body mass index and low haemoglobin levels were associated with major AEs and were used in the nomogram. The bootstrap-corrected concordance index of the nomogram was 0.63, and it showed good calibration. Conclusions: Six pre-surgical factors independently predicted the risk of major AEs. This research might form the basis for risk reduction strategies to minimize the risk of AEs among patients undergoing surgery for apparent early stage endometrial cancer.
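The concordance index used to validate the nomogram can be computed, for a binary outcome such as occurrence of a major AE, as the fraction of concordant event/non-event pairs. A minimal sketch (illustrative only, not the LACE analysis code):

```python
def concordance_index(risks, outcomes):
    """Concordance index (c-index) for a binary outcome.

    risks: predicted risk of a major AE for each patient.
    outcomes: 1 if the patient had a major AE, else 0.
    Returns the fraction of (event, non-event) pairs in which the
    event patient received the higher predicted risk; ties count 0.5.
    A value of 0.5 is chance-level discrimination.
    """
    pairs = 0
    concordant = 0.0
    for r_event, y_event in zip(risks, outcomes):
        if y_event != 1:
            continue
        for r_none, y_none in zip(risks, outcomes):
            if y_none != 0:
                continue
            pairs += 1
            if r_event > r_none:
                concordant += 1.0
            elif r_event == r_none:
                concordant += 0.5
    return concordant / pairs
```

On toy data, risks [0.9, 0.8, 0.3, 0.2] with outcomes [1, 0, 1, 0] give a c-index of 0.75: three of the four event/non-event pairs are ranked correctly.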

Relevance: 60.00%

Abstract:

The interactive art system +-now is modelled on the openness of the natural world. Emergent shapes constitute a novel method for facilitating this openness. With the art system as an example, the relationship between openness and emergence is discussed. Lastly, artist reflections from the creation of the work are presented. These describe the nature of open systems and how they may be created.

Relevance: 60.00%

Abstract:

Computational Fluid Dynamics (CFD) simulations are widely used in mechanical engineering. Although achieving a high level of confidence in numerical modelling is of crucial importance in the field of turbomachinery, verification and validation of CFD simulations are challenging, especially for the complex flows encountered in radial turbines. Comprehensive studies of radial machines are available in the literature. Unfortunately, none of them includes enough detailed geometric data to be properly reproduced, so they cannot be used for academic research and validation purposes. As a consequence, design improvements of such configurations are difficult. Moreover, well-developed analyses of radial turbines appear to be embedded in commercial software but are not available in the open literature, especially at high pressure ratios. The purpose of this paper is to provide a fully open set of data to reproduce the exact geometry of the high pressure ratio single-stage radial-inflow turbine used in the Sundstrand Power Systems T-100 Multipurpose Small Power Unit. First, preliminary one-dimensional meanline design and analysis are performed using the commercial software RITAL from Concepts-NREC in order to establish a complete reference test case for turbomachinery code validation. The proposed design of the existing turbine is then carefully and successfully checked against the geometrical and experimental data partially published in the literature. Then, three-dimensional Reynolds-Averaged Navier-Stokes simulations are conducted by means of the Axcent-PushButton CFD software. The effect of the tip clearance gap is investigated in detail for a wide range of operating conditions. The results confirm that the 3D geometry is correctly reproduced. They also reveal that the turbine is shocked, although it was designed for high-subsonic flow, and highlight the importance of the diffuser.

Relevance: 60.00%

Abstract:

Background: Accumulated biological research outcomes show that biological functions depend not on individual genes but on complex gene networks. Microarray data are widely used to cluster genes according to their expression levels across experimental conditions. However, functionally related genes generally do not show coherent expression across all conditions, since any given cellular process is active only under a subset of conditions. Biclustering finds gene clusters that have similar expression levels across a subset of conditions. This paper proposes a seed-based algorithm that identifies coherent genes in an exhaustive but efficient manner. Methods: To find the biclusters in a gene expression dataset, we exhaustively select combinations of genes and conditions as seeds to create candidate bicluster tables. Each table has two columns: (a) a gene set, and (b) the conditions on which the gene set has expression levels dissimilar to the seed. First, the genes with fewer than the maximum number of dissimilar conditions are identified and a table of these genes is created. Second, the rows that have the same dissimilar conditions are grouped together. Third, the table is sorted in ascending order by the number of dissimilar conditions. Finally, beginning with the first row of the table, a test is run repeatedly to determine whether the cardinality of the gene set in the row is greater than the minimum threshold number of genes in a bicluster. If so, a bicluster is output and the corresponding row is removed from the table. Repeating this process until the table becomes empty systematically identifies all biclusters in the table. Conclusions: This paper presents a novel biclustering algorithm for the identification of additive biclusters. Because it exhaustively tests combinations of genes and conditions, the additive biclusters can be found more readily.
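The candidate-table construction described above can be sketched for a single seed gene as follows. The parameter names (delta, max_dissim, min_genes) and the additive-offset dissimilarity test are illustrative assumptions, and the sketch omits the row-removal loop and the iteration over all seeds of the full algorithm:

```python
from collections import defaultdict

def candidate_biclusters(expr, seed_gene, delta=1.0,
                         max_dissim=2, min_genes=3):
    """Sketch of the seed-based candidate-table step (hypothetical
    parameters). expr maps gene -> list of expression values; a
    condition is 'dissimilar' to the seed when the gene's additive
    offset from the seed deviates there by more than delta."""
    seed = expr[seed_gene]
    n_cond = len(seed)
    table = defaultdict(list)  # dissimilar-condition set -> gene list
    for gene, values in expr.items():
        # additive model: estimate each gene's constant offset to the seed
        offset = sum(v - s for v, s in zip(values, seed)) / n_cond
        dissim = frozenset(
            c for c, (v, s) in enumerate(zip(values, seed))
            if abs((v - s) - offset) > delta
        )
        if len(dissim) <= max_dissim:        # step 1: filter genes
            table[dissim].append(gene)       # step 2: group identical sets
    # step 3: sort rows ascending by number of dissimilar conditions
    rows = sorted(table.items(), key=lambda kv: len(kv[0]))
    # step 4: emit rows whose gene set meets the minimum size
    return [(sorted(genes), sorted(conds))
            for conds, genes in rows if len(genes) >= min_genes]
```

For example, genes that are constant shifts of the seed land in the same row with an empty dissimilar-condition set, forming an additive bicluster over all conditions.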

Relevance: 60.00%

Abstract:

miRDeep and its variants are widely used to quantify known and novel microRNAs (miRNAs) from small RNA sequencing (RNAseq) data. This article describes miRDeep*, our integrated miRNA identification tool, which is modeled on miRDeep but improves the precision of detecting novel miRNAs by introducing new strategies to identify precursor miRNAs. miRDeep* has a user-friendly graphical interface and accepts raw data in FastQ and Sequence Alignment Map (SAM) or the binary equivalent (BAM) format. Known and novel miRNA expression levels, as measured by the number of reads, are displayed in an interface which shows each RNAseq read relative to the pre-miRNA hairpin. The secondary pre-miRNA structure and read locations for each predicted miRNA are shown and kept in a separate figure file. Moreover, the target genes of known and novel miRNAs are predicted using the TargetScan algorithm, and the targets are ranked according to the confidence score. miRDeep* is an integrated standalone application in which sequence alignment, pre-miRNA secondary structure calculation and graphical display are purely Java coded. The tool can be run on an ordinary personal computer with 1.5 GB of memory. Further, we show that miRDeep* outperformed existing miRNA prediction tools on our LNCaP and other small RNAseq datasets. miRDeep* is freely available online at http://www.australianprostatecentre.org/research/software/mirdeep-star
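Quantifying expression as the "number of reads" per pre-miRNA hairpin, as described above, amounts to counting alignments that overlap each hairpin locus. An illustrative Python sketch (not miRDeep* code, which is Java; all names and coordinates here are hypothetical):

```python
def count_hairpin_reads(alignments, hairpins):
    """Illustrative read-counting step.

    alignments: (chrom, start, end) tuples parsed from SAM/BAM records.
    hairpins: miRNA name -> (chrom, start, end) of the pre-miRNA hairpin.
    Returns a per-miRNA count of reads overlapping the hairpin locus.
    """
    counts = {name: 0 for name in hairpins}
    for chrom, start, end in alignments:
        for name, (h_chrom, h_start, h_end) in hairpins.items():
            # half-open intervals overlap when each starts before the
            # other one ends
            if chrom == h_chrom and start < h_end and end > h_start:
                counts[name] += 1
    return counts
```

A production tool would use an interval index rather than this quadratic scan, but the counting rule is the same.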

Relevance: 60.00%

Abstract:

Open the sports or business section of your daily newspaper, and you are immediately bombarded with an array of graphs, tables, diagrams, and statistical reports that require interpretation. Across all walks of life, the need to understand statistics is fundamental. Given that our youngsters’ future world will be increasingly data laden, scaffolding their statistical understanding and reasoning is imperative, from the early grades on. The National Council of Teachers of Mathematics (NCTM) continues to emphasize the importance of early statistical learning; data analysis and probability was the Council’s professional development “Focus of the Year” for 2007–2008. We need such a focus, especially given the results of the statistics items from the 2003 NAEP. As Shaughnessy (2007) noted, students’ performance was weak on the more complex items involving interpretation or application of information in graphs and tables. Furthermore, little or no gain was made between the 2000 and 2003 NAEP studies. One approach I have taken to promote young children’s statistical reasoning is data modeling. Having implemented a number of model-eliciting activities involving working with data in grades 3–9 (e.g., English 2010), I observed how competently children could create their own mathematical ideas and representations before being instructed how to do so. I thus wished to introduce data-modeling activities to younger children, confident that they would likewise generate their own mathematics. I recently implemented data-modeling activities in a cohort of three first-grade classrooms of six-year-olds. I report on some of the children’s responses and discuss the components of data modeling the children engaged in.

Relevance: 60.00%

Abstract:

RatSLAM is a navigation system based on the neural processes underlying navigation in the rodent brain, capable of operating with low-resolution monocular image data. Seminal experiments using RatSLAM include mapping an entire suburb with a web camera and a long-term robot delivery trial. This paper describes OpenRatSLAM, an open-source version of RatSLAM with bindings to the Robot Operating System (ROS) framework to leverage advantages such as robot and sensor abstraction, networking, data playback, and visualization. OpenRatSLAM comprises connected ROS nodes representing RatSLAM’s pose cells, experience map, and local view cells, as well as a fourth node that provides visual odometry estimates. The nodes are described with reference to the RatSLAM model, along with salient details of the ROS implementation such as topics, messages, parameters, class diagrams, sequence diagrams, and parameter tuning strategies. The performance of the system is demonstrated on three publicly available open-source datasets.

Relevance: 60.00%

Abstract:

For the evaluation, design, and planning of traffic facilities and measures, traffic simulation packages are the de facto tools for consultants, policy makers, and researchers. However, the available commercial simulation packages do not always offer the desired workflow and flexibility for academic research. In many cases, researchers resort to designing and building their own dedicated models, without an intrinsic incentive (or the practical means) to make the results available in the public domain. To make matters worse, a substantial part of these efforts goes into rebuilding basic functionality and, in many respects, reinventing the wheel. This problem not only affects the research community but the entire traffic simulation community, and it frustrates the development of traffic simulation in general. To address this problem, this paper describes an open-source approach, OpenTraffic, which is being developed as a collaborative effort between the Queensland University of Technology, Australia; the National Institute of Informatics, Tokyo; and the Technical University of Delft, the Netherlands. The OpenTraffic simulation framework enables academics from different geographic areas and disciplines within the traffic domain to work together and contribute to a specific topic of interest, ranging from travel choice behavior to car following, and from response to intelligent transportation systems to activity planning. The modular approach enables users of the software to focus on their area of interest, while other functional modules can be regarded as black boxes. Specific attention is paid to the standardization of data inputs and outputs for traffic simulations. Such standardization will allow the sharing of data with many existing commercial simulation packages.
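Car following, one of the behavioral sub-models mentioned above, illustrates the kind of pluggable module such a framework hosts. A minimal sketch of the well-known Intelligent Driver Model (IDM) follows; this is not OpenTraffic's own code, and the default parameter values are textbook-style illustrations:

```python
import math

def idm_acceleration(v, gap, dv, v0=30.0, T=1.5, s0=2.0,
                     a_max=1.0, b=2.0):
    """Intelligent Driver Model (Treiber et al.) acceleration.

    v: own speed (m/s); gap: bumper-to-bumper distance to the leader (m);
    dv: approach rate v - v_leader (m/s); v0: desired speed (m/s);
    T: safe time headway (s); s0: jam distance (m);
    a_max: maximum acceleration; b: comfortable deceleration (m/s^2).
    """
    # desired dynamic gap: jam distance + headway term + braking term
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    # free-road term minus interaction term
    return a_max * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)
```

A stopped vehicle with a large gap accelerates at close to a_max, while a vehicle at its desired speed with a short gap brakes; in a modular simulator, a different car-following law could be swapped in behind the same interface.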

Relevance: 60.00%

Abstract:

The Queensland University of Technology (QUT) Library, like many other academic and research institution libraries in Australia, has been collaborating with a range of academic and service-provider partners to develop research data management services and collections. Three main strategies are being employed, and an overview of the process, infrastructure, usage and benefits of each of these service aspects is provided. The development of processes and infrastructure to facilitate the strategic identification and management of QUT-developed datasets has been a major focus. A number of Australian National Data Service (ANDS) sponsored projects, including Seeding the Commons, Metadata Hub / Store, Data Capture and Gold Standard Record Exemplars, have provided or will provide QUT with a data registry system, linkages to storage, processes for identifying and describing datasets, and a degree of academic awareness. QUT supports open access and has established a culture of making its research outputs available via the QUT ePrints institutional repository. Incorporating open access research datasets into the library collections is an equally important aspect of facilitating the adoption of data-centric eresearch methods. Some datasets are available commercially, and the library has collaborated with QUT researchers, especially in the QUT Business School, to identify and procure a rapidly growing range of financial datasets to support research. The library undertakes licensing and uses the Library Resource Allocation to pay for the subscriptions. It is a new area of collection development, with much still to be learned. The final strategy discussed is the library acting as “data broker”: QUT Library has been working with researchers to identify these datasets and to undertake licensing, payment and access as a centrally supported service on behalf of researchers.

Relevance: 60.00%

Abstract:

The impact of research can be measured by use or citation count. The more widely available research outputs are, the more likely they are to be used, and the higher the impact. Making the author-manuscript version of research outputs freely available via the institutional repository greatly increases the availability of research outputs and can increase their impact. QUT ePrints, the open access institutional repository of research outputs at Queensland University of Technology (QUT), Australia, was established in 2003 and is managed by the QUT Library. The repository now contains over 39,000 records. More than 21,000 of these records have full-text copies attached as a result of continuous effort to maintain momentum and encourage academic engagement. The full-text deposit rate has continued to increase over time and, as at August 2012 (the time of writing), 88% of the records for works published in 2012 provide access to a full-text copy. Achieving success has required a long-term approach to collaboration, open access advocacy, repository promotion, support for the deposit process, and ongoing system development. This paper discusses the various approaches adopted by QUT Library, in collaboration with other areas of the University, to achieve success. Approaches include mainstreaming the repository by having it report to the University Research and Innovation Committee; regularly providing deposit-rate data to faculties; championing key academic supporters; and holding promotional competitions and events, such as during Open Access Week. Support and training are provided via regular deposit workshops with academics and faculty research support groups, and via online self-help information. Recent system developments have included the integration of citation data (from Scopus and Web of Science) and the development of a statistical reporting system, both of which incentivise engagement.

Relevance: 60.00%

Abstract:

The Council of Australian Governments (COAG) in 2003 gave in-principle approval to a best-practice report recommending a holistic approach to managing natural disasters in Australia, incorporating a move from a traditional response-centric approach to a greater focus on mitigation, recovery and resilience, with community well-being at the core. Since that time, there have been a range of complementary developments that have supported the COAG-recommended approach. Developments have been administrative, legislative and technological, both in reaction to the COAG initiative and resulting from regular natural disasters. This paper reviews the characteristics of the spatial data that are becoming increasingly available in Federal, state and regional jurisdictions with respect to their fitness for purpose for disaster planning and mitigation and for strengthening community resilience. In particular, Queensland foundation spatial data, which is increasingly accessible to the public under the provisions of the Right to Information Act 2009, the Information Privacy Act 2009, and recent open data reform initiatives, is evaluated. The Fitzroy River catchment and floodplain is used as a case study for the review. The catchment covers an area of 142,545 km², the largest river catchment flowing to the eastern coast of Australia. The Fitzroy River basin experienced extensive flooding during the 2010–2011 Queensland floods. The basin is an area of important economic, environmental and heritage values and contains significant infrastructure critical for the mining and agricultural sectors, the two most important economic sectors for Queensland. Consequently, the spatial datasets for this area play a critical role in disaster management and in protecting critical infrastructure essential for economic and community well-being.
The foundation spatial datasets are assessed for disaster planning and mitigation purposes using data quality indicators such as resolution, accuracy, integrity, validity and audit trail.

Relevance: 60.00%

Abstract:

Stabilization of distal radius fractures by open reduction internal fixation (ORIF) has become increasingly common. There is currently no consensus on the optimal time to commence range of motion (ROM) exercises post-ORIF. A retrospective cohort review was conducted over a five-year period to compare wrist and forearm range of motion outcomes and number of therapy sessions between patients who commenced active ROM exercises within the first seven days following ORIF of a distal radius fracture and those who commenced from day eight onward. One hundred and twenty-one patient cases were identified. Clinical data, active ROM at initial and discharge therapy assessments, fracture type, surgical approach, and number of therapy sessions attended were recorded. One hundred and seven (88.4%) cases had complete datasets. The early active ROM group (n = 37) commenced ROM exercises a mean (SD) of 4.27 (1.8) days post-ORIF. The comparator group (n = 70) commenced ROM exercises 24.3 (13.6) days post-ORIF. No significant differences were identified between groups in ROM at initial or discharge assessments, or in therapy sessions attended. The results of this study indicate that patients who commenced active ROM exercises an average of 24 days after surgery achieved comparable ROM outcomes, with a similar number of therapy sessions, to those who commenced ROM exercises within the first week.

Relevance: 60.00%

Abstract:

Queensland University of Technology (QUT) Library offers a range of resources and services to researchers as part of its research support portfolio. This poster presents key features of two of the data management services offered by research support staff at QUT Library. The first is QUT Research Data Finder (RDF), a product of the Australian National Data Service (ANDS) funded Metadata Stores project. RDF is a data registry (metadata repository) that aims to publicise datasets that are research outputs arising from completed QUT research projects. The second is a software and code registry, currently under development with the sole purpose of improving the discovery of source code and software as QUT research outputs. RESEARCH DATA FINDER: As an integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT’s research administration system, ResearchMaster, and QUT’s Academic Profiles system, to provide high-quality data descriptions that increase awareness of, and access to, shareable research data. The repository and its workflows are designed to foster better data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximise the impact of existing research datasets. SOFTWARE AND CODE REGISTRY: The QUT Library software and code registry project stems from researchers’ concerns about development activities, storage, accessibility, discoverability and impact, sharing, and copyright and IP ownership of software and code. As a result, the Library is developing a registry for code and software research outputs, which will use the existing Research Data Finder architecture. The software underpinning both registries is VIVO, open-source software developed by Cornell University. The registry will use the Research Data Finder service instance of VIVO and will include a searchable interface, links to code/software locations and metadata feeds to Research Data Australia. Key benefits of the project include: improving the discoverability and reuse of QUT researchers’ code and software amongst the QUT research community; increasing the profile of QUT research outputs at a national level by providing a metadata feed to Research Data Australia; and improving the metrics for access and reuse of code and software in the repository.