948 results for Map of the Courts
Abstract:
The role of the judiciary in common law systems is to create, interpret and uphold the law. As such, decisions by courts on matters related to ecologically sustainable development, natural resource use and management, and climate change make an important contribution to earth jurisprudence. There are examples where judicial decisions further the goals of earth jurisprudence and examples where decisions go against its principles. This presentation will explore judicial approaches to standing in Australia and America, examining two trends in each jurisdiction. Approaches by American courts to standing will be examined in reference to climate change and environmental justice litigation, while Australian approaches to standing will be examined in the context of public interest litigation and environmental criminal negligence cases. The presentation will draw some conclusions about the role of standing in each of these cases and the implications of this for earth jurisprudence.
Abstract:
This article considers the uncertainty surrounding the scope of the best interests duty which forms part of the Government’s Future of Financial Advice (FOFA) reforms. It is likely to be many years before the courts can interpret and clarify the content of the duty. Under the new regime, the provision of personal financial advice will be made more difficult, complex and costly, and these costs will be passed on to consumers. The article also considers whether there will still be scope for delivering standardized, non-tailored advice in the light of the best interests duty. In the past, standardized advice has allowed large amounts of low-level, generic advice to be delivered very efficiently. In order to avoid breaching the best interests duty, standardized advice should only be used rarely, and only after a careful assessment has been made to ensure that a standardized approach is appropriate.
Abstract:
This article analyses the inconsistent approaches taken by courts when interpreting provisions of the Corporations Act which address debts or expenses “incurred” by receivers, administrators and liquidators. The article contends for a consistent construction of these provisions which will enable the legislation to operate (as was intended) for the benefit of persons who supply goods, services or labour to companies in external administration. The article explains how and why debts can be “incurred” by insolvency practitioners continuing on pre-existing contracts. Specifically, the article contends for a construction of ss 419 and 443A of the Corporations Act which renders receivers and administrators personally liable for certain entitlements of employees (eg, wages and superannuation contributions) which become due and payable by reason of the decision of a receiver or administrator to continue a pre-existing contract rather than terminate it.
Abstract:
Population-wide associations between loci due to linkage disequilibrium can be used to map quantitative trait loci (QTL) with high resolution. However, spurious associations between markers and QTL can also arise as a consequence of population stratification. Statistical methods that cannot distinguish associations between loci due to linkage disequilibrium from those arising in other ways can produce false-positive results. The transmission-disequilibrium test (TDT) is a robust test for detecting QTL, as it exploits within-family associations that are not affected by population stratification. However, some TDTs are formulated in a rigid form, which limits their potential applications. In this study we generalize the TDT using mixed linear models to allow greater statistical flexibility. Allelic effects are estimated with two independent parameters: one exploiting the robust within-family information and the other the potentially biased between-family information. A significant difference between these two parameters can be taken as evidence of spurious association. This methodology was then used to test the effects of the melanocortin-4 receptor gene (MC4R) on production traits in the pig. The new analyses supported the previously reported results; i.e., the studied polymorphism is either causal or in very strong linkage disequilibrium with the causal mutation, and provided no evidence of spurious association.
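To make the two-parameter idea concrete, one common within/between-family parameterisation of such a mixed linear model is sketched below; the notation is illustrative, and the exact model fitted in the study may differ.

```latex
y_{ij} = \mu + \beta_{b}\,\bar{x}_{i} + \beta_{w}\left(x_{ij} - \bar{x}_{i}\right) + u_{i} + e_{ij}
```

Here x_ij is the allele count of individual j in family i and \bar{x}_i is the family mean, so \beta_w draws only on the robust within-family information while \beta_b uses the potentially biased between-family information; u_i is a random family (polygenic) effect and e_ij the residual. A test of H0: \beta_w = \beta_b then provides the check for spurious association.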
Abstract:
As one of the measures for decreasing road traffic noise in a city, the control of traffic flow and physical distribution is considered. To implement such measures effectively, a model for predicting the traffic flow across the citywide road network is necessary. In this study, the existing model named AVENUE was used as the traffic flow prediction model. The traffic flow model was integrated with a road vehicle sound power model and a sound propagation model to establish a new road traffic noise prediction model. As a case study, the prediction model was applied to the road network of Tsukuba city in Japan and a noise map of the city was produced. To examine the calculation accuracy of the noise map, the calculated noise levels at the main roads were compared with measured values. The results indicate that a high-accuracy noise map of a city can be produced using the noise prediction model developed in this study.
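As a rough illustration of how such a model chain can be composed, the sketch below converts predicted traffic flows into receptor noise levels using an illustrative speed-based sound power formula and simple point-source spreading over a reflecting plane; the coefficients, function names and aggregation are assumptions made for illustration, not the AVENUE-based model itself.

```python
import math

def vehicle_sound_power(speed_kmh, a=46.7, b=30.0):
    """A-weighted sound power level of one vehicle in dB.
    Illustrative steady-flow coefficients; a real model would
    distinguish vehicle classes and road surfaces."""
    return a + b * math.log10(speed_kmh)

def propagated_level(l_w, distance_m):
    """Point-source spreading over a reflecting plane:
    L = L_W - 8 - 20*log10(r)."""
    return l_w - 8.0 - 20.0 * math.log10(distance_m)

def receptor_level(flows, distance_m):
    """Energy-sum the contributions of each predicted traffic stream.
    `flows` is a list of (vehicles_per_hour, mean_speed_kmh) tuples,
    e.g. taken from a citywide traffic-flow model; the flow term
    10*log10(Q) is a crude aggregation used only for illustration."""
    energy = 0.0
    for veh_per_h, speed_kmh in flows:
        l_w = vehicle_sound_power(speed_kmh) + 10.0 * math.log10(veh_per_h)
        energy += 10.0 ** (propagated_level(l_w, distance_m) / 10.0)
    return 10.0 * math.log10(energy)

# Two hypothetical traffic streams observed 30 m from the receptor
print(round(receptor_level([(1200, 50.0), (300, 60.0)], 30.0), 1))
```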
Abstract:
The trial in Covecorp Constructions Pty Ltd v Indigo Projects Pty Ltd (File no BS 10157 of 2001; BS 2763 of 2002) commenced on 8 October 2007 before Fryberg J, but the matter settled on 6 November 2007 before the conclusion of the trial. This case was conducted as an “electronic trial” with the use of technology developed within the court. This was the first case in Queensland to employ this technology at trial level. The Court’s aim was to find a means to capture the key benefits which are offered by the more sophisticated trial presentation software of commercial service providers, in a way that was inexpensive for the parties and would facilitate the adoption of technology at trial much more broadly than has been the case to date.
Abstract:
Social media tools are often the result of innovations in information technology, developed by IT professionals and innovators. Nevertheless, IT professionals, many of whom are responsible for designing and building social media technologies, have not been investigated with respect to how they themselves use or experience social media for professional purposes. This study will use Information Grounds Theory (Pettigrew, 1998) as a framework to study IT professionals’ experience of using social media for professional purposes. Information grounds facilitate the opportunistic discovery of information: they are social settings created temporarily when people gather at a place for a specific purpose (e.g., doctors’ waiting rooms, office tea rooms), but whose social atmosphere stimulates the spontaneous sharing of information (Pettigrew, 1999). This study proposes that social media has the qualities of a rich information ground: people participate from separate “places” in cyberspace synchronously and in real time, making it almost as dynamic and unplanned as a physical information ground. There is limited research on how social media platforms are perceived as a “place” (a place to go to, a place to gather, or a place to be seen in) comparable to physical spaces, and there is no empirical study on how IT professionals use or “experience” social media. The data for this study are being collected through a study of IT professionals who currently use Twitter. A digital ethnography approach is being taken, in which the researcher “follows” the participants online and observes their behaviours and interactions on social media. A sub-set of participants will then be interviewed about their experiences with and within social media and how social media compares with traditional information grounds, information communication, and collaborative environments. An Evolved Grounded Theory (Glaser, 1992) approach will be used to analyse the tweet data and interviews and to map the findings against Information Grounds Theory. Findings from this study will provide a foundational understanding of IT professionals’ experiences within social media, and can help both professionals and researchers understand this fast-evolving mode of communication.
Abstract:
Multiple sclerosis (MS) is a common chronic inflammatory disease of the central nervous system. Susceptibility to the disease is affected by both environmental and genetic factors. Genetic factors include haplotypes in the major histocompatibility complex (MHC) and over 50 non-MHC loci reported by genome-wide association studies. Amongst these, we previously reported polymorphisms in chromosome 12q13-14 with a protective effect in individuals of European descent. This locus spans 288 kb and contains 17 genes, including several candidate genes with potentially significant pathogenic and therapeutic implications. In this study, we aimed to fine-map this locus. We implemented a two-phase study: a variant discovery phase, in which we used next-generation sequencing and two target-enrichment strategies [long-range polymerase chain reaction (PCR) and Nimblegen's solution phase hybridization capture] in pools of 25 samples; and a genotyping phase, in which we genotyped 712 variants in 3577 healthy controls and 3269 MS patients. This study confirmed the association (rs2069502, P = 9.9 × 10−11, OR = 0.787) and narrowed the locus of association down to an 86.5 kb region. Although the study was unable to pinpoint the key associated variant, we identified a haplotype block of 42 (genotyped and imputed) single-nucleotide polymorphisms that is likely to harbour the causal variant. No evidence of association at previously reported low-frequency variants in CYP27B1 was observed. As part of the study we compared variant discovery performance between the two target-enrichment strategies. We concluded that our pools enriched with Nimblegen's solution phase hybridization capture had better sensitivity for detecting true variants than the pools enriched with long-range PCR, whilst specificity was better in the long-range PCR-enriched pools than in the solution phase hybridization capture-enriched pools; this result has important implications for the design of future fine-mapping studies.
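For illustration only, the kind of single-variant case-control test underlying a signal such as rs2069502 can be sketched as a simple allelic 2×2 test; the counts below are hypothetical, and the actual analysis would typically use logistic regression with covariates and imputed dosages.

```python
from scipy.stats import chi2_contingency

def allelic_association(case_alleles, control_alleles):
    """Simple 2x2 allelic test.
    case_alleles / control_alleles: (minor, major) allele counts.
    Returns (odds_ratio, p_value). Illustrative only."""
    a, b = case_alleles
    c, d = control_alleles
    odds_ratio = (a * d) / (b * c)
    _, p_value, _, _ = chi2_contingency([[a, b], [c, d]], correction=False)
    return odds_ratio, p_value

# Hypothetical allele counts for 3269 cases and 3577 controls
print(allelic_association((2100, 4438), (2650, 4504)))
```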
Abstract:
Insulated Rail Joints (IRJs) are designed to electrically isolate two rails in rail tracks to control the signalling system for safer train operations. Unfortunately, the gapped section of the IRJ is structurally weak and often fails prematurely, especially in heavy haul tracks, which adversely affects service reliability and efficiency. IRJs suffer from a number of failure modes; railhead ratchetting at the gap is, however, regarded as the root cause and is the failure mode addressed in this thesis. Ratchetting increases with increasing wheel loads; in the absence of a life prediction model, effective management of IRJs under increased wagon wheel loads has become very challenging. The main aim of this thesis is therefore to determine a method for predicting the service life of IRJs. The distinct discontinuity of the railhead at the gap means that the Hertzian theory and the rolling contact shakedown map, commonly used for continuously welded rails, are not applicable to examining the metal ratchetting of IRJs. The Finite Element (FE) technique is therefore used in this thesis to explore the railhead metal ratchetting characteristics, with boundary conditions determined from a full-scale study of IRJ specimens under rolling contact of loaded wheels. A special-purpose test set-up containing a full-scale wagon wheel was used to apply rolling wheel loads on the railhead edges of the test specimens. The state of the rail end face strains was determined using a non-contact digital imaging technique and used to calibrate the FE model. The basic material parameters for the FE model were obtained through independent uniaxial, monotonic tensile tests on specimens cut from head-hardened virgin rails. The monotonic tensile test data were used to establish a cyclic load simulation model of the railhead steel specimen; the simulated cyclic load test provided the necessary data for the three-component decomposed kinematic hardening plastic strain accumulation model of Chaboche. A performance-based service life prediction algorithm for IRJs was established using the plastic strain accumulation obtained from the Chaboche model. The service lives of IRJs predicted using this algorithm agree well with published data. The finite element model was also used to carry out a sensitivity study on the effect of wheel diameter on railhead metal plasticity. This study revealed that the depth of the plastic zone at the railhead edges is independent of the wheel diameter; however, a larger wheel diameter is shown to increase the service life of IRJs.
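For reference, the decomposed Chaboche kinematic hardening rule referred to above is conventionally written as a sum of backstress components (three in this case); the symbols below follow the standard formulation rather than the thesis's own notation.

```latex
\boldsymbol{\alpha} = \sum_{i=1}^{3} \boldsymbol{\alpha}_{i},
\qquad
\dot{\boldsymbol{\alpha}}_{i} = \tfrac{2}{3}\, C_{i}\, \dot{\boldsymbol{\varepsilon}}^{p} - \gamma_{i}\, \boldsymbol{\alpha}_{i}\, \dot{p}
```

where \alpha is the total backstress, C_i and \gamma_i are material constants calibrated from the cyclic-load simulation data, \dot{\varepsilon}^p is the plastic strain rate and \dot{p} is the accumulated equivalent plastic strain rate.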
Abstract:
Purpose: Cancer cells have been shown to be more susceptible to Ran knockdown than normal cells. We now investigate whether Ran is a potential therapeutic target of cancers with frequently found mutations that lead to higher Ras/MEK/ERK [mitogen-activated protein/extracellular signal-regulated kinase (ERK; MEK)] and phosphoinositide 3-kinase (PI3K)/Akt/mTORC1 activities. Experimental Design: Apoptosis was measured by flow cytometry [propidium iodide (PI) and Annexin V staining] and MTT assay in cancer cells grown under different conditions after knockdown of Ran. The correlations between Ran expression and patient survival were examined in breast and lung cancers. Results: Cancer cells with their PI3K/Akt/mTORC1 and Ras/MEK/ERK pathways inhibited are less susceptible to Ran silencing-induced apoptosis. K-Ras-mutated, c-Met-amplified, and Pten-deleted cancer cells are also more susceptible to Ran silencing-induced apoptosis than their wild-type counterparts, and this effect is reduced by inhibitors of the PI3K/Akt/mTORC1 and MEK/ERK pathways. Overexpression of Ran in clinical specimens is significantly associated with poor patient outcome in both breast and lung cancers. This association is dramatically enhanced in cancers with increased c-Met or osteopontin expression, or with oncogenic mutations of K-Ras or PIK3CA, all of which are changes that potentially correlate with activation of the PI3K/Akt/mTORC1 and/or Ras/MEK/ERK pathways. Silencing Ran also results in dysregulation of the nucleocytoplasmic transport of transcription factors and downregulation of Mcl-1 expression at the transcriptional level, both of which are reversed by inhibitors of the PI3K/Akt/mTORC1 and MEK/ERK pathways. Conclusion: Ran is a potential therapeutic target for the treatment of cancers with mutations or expression changes in proto-oncogenes that lead to activation of the PI3K/Akt/mTORC1 and Ras/MEK/ERK pathways. ©2011 AACR.
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation in cloud computing. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. In this paper we therefore propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it by comparing it with several other heuristics on solution quality and computation time, solving a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics and can obtain a better solution in a reasonable time. Furthermore, we verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement it generates for a benchmark problem with a conventional mapper/reducer placement that places a fixed number of mappers/reducers on each machine. The comparison shows that the computation using our mapper/reducer placement is much cheaper than the computation using the conventional placement while still satisfying the computation deadline.
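As a minimal sketch of a bin-packing-style placement heuristic of the general kind discussed above (first-fit decreasing), the code below packs mapper/reducer resource demands onto identical machines; the demand values, machine capacity and the notion of provisioning a new machine are illustrative assumptions, not the heuristic proposed in the paper.

```python
def place_tasks(task_demands, machine_capacity):
    """First-fit-decreasing placement of map/reduce task demands onto machines.

    task_demands: resource demand of each task (e.g. slots required)
    machine_capacity: capacity of each (identical) machine
    Returns a list of machines, each holding the demands placed on it.
    """
    machines = []  # each entry: [remaining_capacity, [placed demands]]
    for demand in sorted(task_demands, reverse=True):
        for machine in machines:
            if machine[0] >= demand:      # first machine that still fits
                machine[0] -= demand
                machine[1].append(demand)
                break
        else:                             # nothing fits: provision a new machine
            machines.append([machine_capacity - demand, [demand]])
    return [placed for _, placed in machines]

# Example: eight mapper/reducer tasks packed onto machines with 10 slots each
print(place_tasks([6, 5, 5, 4, 3, 3, 2, 2], 10))
# -> [[6, 4], [5, 5], [3, 3, 2, 2]]
```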
Abstract:
Computational models represent a highly suitable framework, not only for testing biological hypotheses and generating new ones but also for optimising experimental strategies. As one surveys the literature devoted to cancer modelling, it is obvious that immense progress has been made in applying simulation techniques to the study of cancer biology, although the full impact has yet to be realised. For example, there are excellent models to describe cancer incidence rates or factors for early disease detection, but these predictions are unable to explain the functional and molecular changes that are associated with tumour progression. In addition, it is crucial that interactions between mechanical effects and intracellular and intercellular signalling are incorporated in order to understand cancer growth, its interaction with the extracellular microenvironment and the invasion of secondary sites. There is a compelling need to tailor new, physiologically relevant in silico models that are specialised for particular types of cancer, such as ovarian cancer owing to its unique route of metastasis, and that are capable of investigating anti-cancer therapies and generating both qualitative and quantitative predictions. This Commentary will focus on how computational simulation approaches can advance our understanding of ovarian cancer progression and treatment, in particular with the help of multicellular cancer spheroids, and thus can inform biological hypotheses and experimental design.
Abstract:
BACKGROUND: The increasing number of assembled mammalian genomes makes it possible to compare genome organisation across mammalian lineages and to reconstruct the chromosomes of the ancestral marsupial and therian (marsupial and eutherian) mammals. However, the reconstruction of ancestral genomes requires genome assemblies to be anchored to chromosomes. The recently sequenced tammar wallaby (Macropus eugenii) genome was assembled into over 300,000 contigs. We previously devised an efficient strategy for mapping large evolutionarily conserved blocks in non-model mammals, and applied it here to determine the arrangement of conserved blocks on all wallaby chromosomes, thereby permitting comparative maps to be constructed and helping to resolve the long-debated question of whether the ancestral marsupial karyotype was 2n=14 or 2n=22. RESULTS: We identified large blocks of genes conserved between human and opossum, and mapped genes corresponding to the ends of these blocks by fluorescence in situ hybridization (FISH). A total of 242 genes were assigned to wallaby chromosomes in the present study, bringing the total number of genes mapped to 554 and making this the most densely cytogenetically mapped marsupial genome. We used these gene assignments to construct comparative maps between wallaby and opossum, which uncovered many intrachromosomal rearrangements, particularly for genes found on wallaby chromosomes X and 3. Expanding the comparisons to include chicken and human permitted the putative ancestral marsupial (2n=14) and therian mammal (2n=19) karyotypes to be reconstructed. CONCLUSIONS: Our physical mapping data for the tammar wallaby have uncovered the events shaping marsupial genomes and enabled us to predict the ancestral marsupial karyotype, supporting a 2n=14 ancestor. Furthermore, our predicted therian ancestral karyotype has helped us to understand the evolution of the ancestral eutherian genome.