Abstract:
Traditional recommendation methods offer users items, which are inanimate, making the recommendation one-way. Emerging applications such as online dating or job recruitment require reciprocal people-to-people recommendations, which are animate and two-way. In this paper, we propose a reciprocal collaborative method based on the concepts of users' similarities and common neighbors. The dataset employed for the experiment was gathered from a real-life online dating network. The proposed method is compared with baseline methods that use traditional collaborative algorithms. Results show the proposed method achieves noticeably better performance than the baseline methods.
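The common-neighbor idea lends itself to a compact sketch. The following is a hypothetical illustration only: the scoring rule, the harmonic-mean combination, and the toy contact data are all invented assumptions, not the algorithm evaluated in the paper.

```python
# Hypothetical reciprocal common-neighbor scoring sketch (not the paper's method).

def build_received(sent):
    """Invert the contact graph: received[t] = users who contacted t."""
    received = {}
    for u, targets in sent.items():
        for t in targets:
            received.setdefault(t, set()).add(u)
    return received

def neighbor_score(sent, received, a, b):
    """Directed score a -> b: fraction of b's admirers (excluding a) whose
    contact history overlaps a's, i.e. common-neighbor evidence b suits a."""
    admirers = received.get(b, set()) - {a}
    if not admirers:
        return 0.0
    similar = {u for u in admirers if sent.get(u, set()) & sent.get(a, set())}
    return len(similar) / len(admirers)

def reciprocal_score(sent, received, a, b):
    """Two-way score: harmonic mean penalises one-sided interest."""
    s_ab = neighbor_score(sent, received, a, b)
    s_ba = neighbor_score(sent, received, b, a)
    if s_ab == 0.0 or s_ba == 0.0:
        return 0.0
    return 2 * s_ab * s_ba / (s_ab + s_ba)

# toy contact graph: sent[u] = profiles u expressed interest in
sent = {
    "a": {"x", "y"},
    "b": {"a", "z"},
    "c": {"x", "b"},  # c shares a's taste and contacted b
    "d": {"a", "z"},  # d shares b's taste and contacted a
}
received = build_received(sent)
```

The harmonic mean is a natural choice for reciprocity because it is high only when both directed scores are high; any one-sided interest collapses the combined score towards zero.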
Abstract:
Over the last twenty years, the use of open content licences has become increasingly and surprisingly popular. The use of such licences challenges the traditional incentive-based model of exclusive rights under copyright. Instead of providing a means to charge for the use of particular works, what seems important is mitigating potential personal harm to the author and, in some cases, preventing non-consensual commercial exploitation. It is interesting in this context to observe the primacy of what are essentially moral rights over the exclusionary economic rights. The core elements of common open content licences map somewhat closely to continental conceptions of the moral rights of authorship. Most obviously, almost all free software and free culture licences require attribution of authorship. More interestingly, there is a tension between social norms developed in free software communities and those that have emerged in the creative arts over integrity and commercial exploitation. For programmers interested in free software, licence terms that prohibit commercial use or modification are almost completely inconsistent with the ideological and utilitarian values that underpin the movement. For those in the creative industries, on the other hand, non-commercial terms and, to a lesser extent, terms that prohibit all but verbatim distribution continue to play an extremely important role in the sharing of copyright material. While prohibitions on commercial use often serve an economic imperative, there is also a certain personal interest for many creators in avoiding harmful exploitation of their expression – an interest that has sometimes been recognised as forming a component of the moral right of integrity. One particular continental moral right – the right of withdrawal – is present neither in Australian law nor in any of the common open content licences.
Despite some marked differences, both free software and free culture participants are using contractual methods to articulate the norms of permissible sharing. Legal enforcement is rare and often prohibitively expensive, and the various communities accordingly rely upon shared understandings of acceptable behaviour. The licences that are commonly used represent a formalised expression of these community norms and provide the theoretically enforceable legal baseline that lends them legitimacy. The core terms of these licences are designed primarily to alleviate risk and minimise transaction costs in sharing and using copyright expression. Importantly, however, the range of available licences reflects different optional balances in the norms of creating and sharing material. Generally, it is possible to see that, stemming particularly from the US, open content licences are fundamentally important in providing a set of normatively accepted copyright balances that reflect the interests sought to be protected through moral rights regimes. As the cost of creation, distribution, storage, and processing of expression continues to fall towards zero, there are increasing incentives to adopt open content licences to facilitate wide distribution and reuse of creative expression. Thinking of these protocols not only as reducing transaction costs but also as setting normative principles of participation assists in conceptualising the role of open content licences and the continuing tensions that permeate modern copyright law.
Abstract:
The common brown leafhopper Orosius orientalis (Hemiptera: Cicadellidae) is a polyphagous vector of a range of economically important pathogens, including phytoplasmas and viruses, which infect a diverse range of crops. Studies on the plant penetration behaviour of O. orientalis were conducted using the electrical penetration graph (EPG) technique to assist in the characterisation of pathogen acquisition and transmission. EPG waveforms representing different probing activities were acquired from adult O. orientalis probing in planta, using two host species, tobacco Nicotiana tabacum and bean Phaseolus vulgaris, and in vitro using a simple sucrose-based artificial diet. Five waveforms (O1–O5) were evident when O. orientalis fed on bean, whereas only four waveforms (O1–O4) and three waveforms (O1–O3) were observed when the leafhopper fed on tobacco and on the artificial diet, respectively. Both the mean duration of each waveform and the waveform type differed markedly depending on the food substrate. Waveform O4 was not observed on the artificial diet and occurred relatively rarely on tobacco plants when compared with bean plants. Waveform O5 was only observed with leafhoppers probing on beans. The attributes of the waveforms and comparative analyses with previously published hemipteran data are presented and discussed, but further characterisation studies will be needed to confirm our suggestions.
Abstract:
Objective: Radiation safety principles dictate that imaging procedures should minimise the radiation risks involved, without compromising diagnostic performance. This study aims to define a core set of views that maximises clinical information yield for minimum radiation risk. Angiographers would supplement these views as clinically indicated. Methods: An algorithm was developed to combine published data detailing the quality of information derived for the major coronary artery segments through the use of a common set of views in angiography with data relating to the dose–area product and scatter radiation associated with these views. Results: The optimum view set for the left coronary system comprised four views: left anterior oblique (LAO) with cranial (Cr) tilt, shallow right anterior oblique (AP-RAO) with caudal (Ca) tilt, RAO with Ca tilt and AP-RAO with Cr tilt. For the right coronary system three views were identified: LAO with Cr tilt, RAO and AP-RAO with Cr tilt. An alternative left coronary view set including a left lateral achieved minimally superior efficiency (~5%), but with a ~8% higher radiation dose to the patient and 40% higher cardiologist dose. Conclusion: This algorithm identifies a core set of angiographic views that optimises the information yield and minimises radiation risk. This basic data set would be supplemented by additional clinically determined views selected by the angiographer for each case. The decision to use additional views for diagnostic angiography and interventions would be assisted by referencing a table of relative radiation doses for the views being considered.
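The selection step can be caricatured as ranking views by information yield per unit dose. The scores and relative doses below are invented placeholders, and the published algorithm weighed information quality per coronary segment rather than per view, so this greedy ranking is only a sketch of the underlying trade-off.

```python
# Toy view-selection sketch: rank candidate views by information-per-dose.
# All numbers are invented placeholders, not data from the study.

views = {
    "LAO-Cr":    {"info": 0.90, "dose": 1.2},
    "AP-RAO-Ca": {"info": 0.80, "dose": 0.9},
    "AP-RAO-Cr": {"info": 0.60, "dose": 0.8},
    "RAO-Ca":    {"info": 0.70, "dose": 1.0},
    "Left-lat":  {"info": 0.65, "dose": 1.6},  # informative but dose-expensive
}

def select_views(views, n):
    """Greedily pick the n views with the best information-per-dose ratio."""
    ranked = sorted(views, key=lambda v: views[v]["info"] / views[v]["dose"],
                    reverse=True)
    return ranked[:n]

core_set = select_views(views, 4)
```

With these placeholder numbers the left lateral drops out of the core set because its extra information does not justify its dose, which loosely mirrors the trade-off reported in the abstract.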
Abstract:
This paper presents an approach to derive requirements for an avionics architecture that provides onboard sense-and-avoid and autonomous emergency forced landing capabilities to a UAS. The approach is based on two design paradigms: (1) analyzing the common functionality between these two functions to derive requirements for sensors, computing capability, interfaces, etc.; and (2) considering the risk and safety mitigation associated with these functions to derive certification requirements for the system design. We propose to use the Aircraft Certification Matrix (ACM) approach to tailor the system Development Assurance Levels (DAL) and architecture requirements in accordance with acceptable risk criteria. This architecture is developed under the name “Flight Guardian”. Flight Guardian is an avionics architecture that integrates common sensory elements that are essential components of any UAS that is required to be dependable. The Flight Guardian concept is also applicable to conventionally piloted aircraft, where it will serve to reduce cockpit workload.
Abstract:
The most common software analysis tools available for measuring fluorescence images are for two-dimensional (2D) data that rely on manual settings for inclusion and exclusion of data points, and computer-aided pattern recognition to support the interpretation and findings of the analysis. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs images from confocal stacks to provide a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed from the approximation and assumptions of the original model-based stereology(1), even in complex tissue sections(2). Despite these scientific advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of the complex changes in cell morphology, protein localization and receptor trafficking. Current techniques available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. Imaris (Andor Technology, Belfast, Northern Ireland) software provides the feature MeasurementPro, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements to measure a line distance between two objects or to create a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures.
Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module has been developed to measure defined structures such as neurons, which are comprised of dendrites, axons and spines (tree-like structure). This module has been ingeniously utilized to make morphological measurements of non-neuronal cells(3); however, the output data describe an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and to make the software more suitable to a biological application, Imaris developed Imaris Cell. This was a scientific project with the Eidgenössische Technische Hochschule, developed to calculate the relationship between cells and organelles. While the software enables the detection of biological constraints by forcing one nucleus per cell and using cell membranes to segment cells, it cannot be utilized to analyze fluorescence data that are not continuous, because ideally it builds the cell surface without void spaces. To our knowledge, at present no user-modifiable automated approach has been developed that provides morphometric information from 3D fluorescence images and achieves cellular spatial information of an undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced to MATLAB (MathWorks, Inc.). These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method will allow researchers who have extended expertise in biological systems, but not with computer applications, to perform quantification of morphological changes in cell dynamics.
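The kind of shape-independent 3D measurement described above can be sketched without any imaging library: threshold a voxel grid, then measure each connected object by flood fill. The grid values, threshold, and 6-connectivity choice below are illustrative assumptions; the platform itself operates on confocal stacks through Imaris XT and MATLAB.

```python
# Minimal sketch of automated 3D measurement of amorphous objects:
# threshold a voxel grid and report the volume of each connected component.

from collections import deque

def label_volumes(grid, threshold):
    """Return voxel counts of 6-connected components at or above threshold."""
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    seen = set()
    volumes = []
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if (z, y, x) in seen or grid[z][y][x] < threshold:
                    continue
                # breadth-first flood fill over face-adjacent bright voxels
                q, size = deque([(z, y, x)]), 0
                seen.add((z, y, x))
                while q:
                    cz, cy, cx = q.popleft()
                    size += 1
                    for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        n = (cz + dz, cy + dy, cx + dx)
                        if (0 <= n[0] < nz and 0 <= n[1] < ny and 0 <= n[2] < nx
                                and n not in seen
                                and grid[n[0]][n[1]][n[2]] >= threshold):
                            seen.add(n)
                            q.append(n)
                volumes.append(size)
    return sorted(volumes, reverse=True)

# two bright objects and one stray voxel in a toy 3x3x3 stack
stack = [[[0, 0, 0], [0, 9, 0], [0, 0, 0]],
         [[0, 0, 0], [0, 9, 0], [0, 0, 8]],
         [[7, 0, 0], [0, 0, 0], [0, 0, 8]]]
```

Nothing here assumes a predefined cell shape: the flood fill simply follows whatever contiguous fluorescent region is present, which is the property the platform needed from its Imaris XT/MATLAB pipeline.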
Abstract:
Background: On-site wastewater treatment system (OWTS) siting, design and management has traditionally been based on site-specific conditions with little regard to the surrounding environment or the cumulative effect of other systems in the environment. The general approach has been to apply the same framework of standards and regulations to all sites equally, regardless of the sensitivity, or lack thereof, of the receiving environment. Consequently, this has led to the continuing poor performance and failure of on-site systems, resulting in environmental and public health consequences. As a result, there is increasing realisation that more scientifically robust evaluations in regard to site assessment and the underlying ground conditions are needed. Risk-based approaches to on-site system siting, design and management are considered the most appropriate means of improving the current standards and codes for on-site wastewater treatment systems. The Project: Research in relation to this project was undertaken within the Gold Coast City Council region, the major focus being the semi-urban, rural residential and hinterland areas of the city that are not serviced by centralised treatment systems. The Gold Coast has over 15,000 on-site systems in use, with approximately 66% being common septic tank-subsurface dispersal systems. A recent study evaluating the performance of these systems within the Gold Coast area showed approximately 90% were not meeting the specified guidelines for effluent treatment and dispersal. The main focus of this research was to incorporate strong scientific knowledge into an integrated risk assessment process to allow suitable management practices to be set in place to mitigate the inherent risks. To achieve this, research was undertaken focusing on three main aspects involved with the performance and management of OWTS. Firstly, an investigation into the suitability of soil for providing appropriate effluent renovation was conducted.
This involved detailed soil investigations, laboratory analysis and the use of multivariate statistical methods for analysing soil information. The outcomes of these investigations were developed into a framework for assessing soil suitability for effluent renovation. This formed the basis for the assessment of OWTS siting and design risks employed in the developed risk framework. Secondly, an assessment of the environmental and public health risks was performed, specifically related to the release of contaminants from OWTS. This involved detailed groundwater and surface water sampling and analysis to assess the current and potential risks of contamination throughout the Gold Coast region. Additionally, the assessment of public health risk incorporated the use of bacterial source tracking methods to identify the different sources of faecal contamination within monitored regions. Antibiotic resistance pattern analysis was utilised to determine the extent of human faecal contamination, with the outcomes utilised to provide a more indicative public health assessment. Finally, the outcomes of both the soil suitability assessment and the ground and surface water monitoring were utilised for the development of the integrated risk framework. The research outcomes achieved through this project enabled the primary research aims and objectives to be accomplished. This in turn would enable Gold Coast City Council to provide more appropriate assessment and management guidelines based on robust scientific knowledge, which will ultimately ensure that the potential environmental and public health impacts resulting from on-site wastewater treatment are minimised. As part of the implementation of suitable management strategies, a critical point monitoring (CPM) program was formulated. This entailed the identification of the key critical parameters that contribute to the characterised risks at monitored locations within the study area. The CPM will allow more direct procedures to be implemented, targeting the specific hazards at sensitive areas throughout the Gold Coast region.
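A weighted aggregation of hazard ratings is one minimal way to picture an integrated risk score of this kind. The factor names, weights, and site ratings below are invented for illustration and are not the framework developed in this project.

```python
# Hypothetical integrated risk score: weighted sum of per-factor hazard
# ratings on a 0-1 scale. Factors, weights, and ratings are invented.

WEIGHTS = {"soil_suitability": 0.40, "groundwater": 0.35, "public_health": 0.25}

def site_risk(ratings):
    """Weighted risk on a 0-1 scale; higher means a higher-priority site."""
    return sum(WEIGHTS[f] * ratings[f] for f in WEIGHTS)

sites = {
    "hinterland_A": {"soil_suitability": 0.8, "groundwater": 0.6, "public_health": 0.3},
    "semi_urban_B": {"soil_suitability": 0.4, "groundwater": 0.9, "public_health": 0.7},
}

# rank sites to prioritise locations for a critical point monitoring program
ranked = sorted(sites, key=lambda s: site_risk(sites[s]), reverse=True)
```

The ranking, rather than the absolute score, is what would drive where monitoring effort is concentrated.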
Abstract:
Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now considered the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series of reports to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research undertaken was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis being on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation of research brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks in keeping with the primary objectives of the project. This report has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally, the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality.
The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantages of multi-chamber over single-chamber septic tanks are an issue that needs to be resolved in view of the conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for treatment of wastewater and the disinfection of effluent prior to disposal is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number of these systems do not perform to stipulated standards, and effluent quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. The other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems. These include intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices. In recent years the use of biofilters has attracted research interest, particularly the use of peat. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies.
This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore, it is important that the soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferred soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area. This is due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent.
It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This essentially is the main issue of concern, due to the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. However, despite all this, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern rests. Greywater, too, requires similar consideration. Surface irrigation of greywater is currently being permitted in a number of local authority jurisdictions in Queensland. Considering the fact that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease. This is an issue of concern, as greywater can be considered a weak to medium-strength sewage: it contains primary pollutants such as BOD material and nutrients and may also include microbial contamination. Therefore, its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subjected to stringent guidelines. Under these circumstances the surface application of any wastewater requires careful consideration.
The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions. As such, their applicability is location specific. Also, the design of systems based solely on evapotranspiration is questionable. In order to ensure more reliability, the systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics. Secondly, the mechanisms of clogging mat formation have been found to be influenced by various physical, chemical and biological processes. Biological clogging is the most common process taking place and occurs due to bacterial growth or its by-products reducing the soil pore diameters. Biological clogging is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process.
This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate. This in fact is the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short-duration time intervals are contradictory. It has been claimed that intermittent rest periods would result in the aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short-duration rest periods are insufficient to completely decompose the clogging mat, and the intermediate by-products that form as a result of aerobic processes would in fact lead to even more severe clogging. It has been further recommended that the rest periods should be much longer, in the range of about six months. This entails the provision of a second and alternating seepage bed. The other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. The use of physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface has been shown to be only of short-term benefit.
Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat. It has also been found that the nature of the suspended solids is an important factor. The finer particles from extended aeration systems, when compared to those from septic tanks, will penetrate deeper into the soil and hence will ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies. It has also been shown that effluent quality may be a factor in the case of highly permeable soils, but this may not be the case with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, as they can lead to environmental and public health impacts. Significant microbial contamination of surface and groundwater has been attributed to septic tank effluent. There are a number of documented instances of septic tank related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority was found liable for an outbreak of viral hepatitis A, and not the individual septic tank owners, as no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in terms of ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms.
The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Therefore, based on subsurface conditions, this essentially entails a maximum allowable concentration of septic tanks in a given area. Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems. This is likely to happen if saturated conditions persist under the soil absorption bed or due to surface runoff of effluent as a result of system failure. Soils have a finite capacity for the removal of phosphorus. Once this capacity is exceeded, phosphorus too will seep into the groundwater. The relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important to ensure not only that the system design is based on subsurface conditions but also that the density of these systems in a given area is kept within critical limits. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which would also dictate the type of effluent disposal method to be adopted.
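The dilution argument reduces to a simple mass balance: the nitrogen load from all systems in an area must be diluted by recharge to below a target nitrate concentration. All numbers below are illustrative assumptions, not values from the report.

```python
# Nitrate dilution mass-balance sketch for maximum septic tank density.
# Loads, recharge, and concentration limits are illustrative assumptions.

def max_systems_per_hectare(n_load_g_per_day, recharge_mm_per_year,
                            background_mg_per_l, limit_mg_per_l):
    """Systems per hectare such that fully mixed nitrate-N stays below the limit."""
    # 1 mm of recharge over 1 ha = 10 m^3 = 10,000 L
    recharge_l_per_ha_day = recharge_mm_per_year / 365 * 10_000
    # spare dilution capacity of the recharge water, in mg of N per day
    allowable_mg_per_day = (limit_mg_per_l - background_mg_per_l) * recharge_l_per_ha_day
    return allowable_mg_per_day / (n_load_g_per_day * 1000)

# e.g. 25 g N/day per system, 200 mm/yr recharge, 2 mg/L background, 10 mg/L limit
density = max_systems_per_hectare(25, 200, 2, 10)
```

Under these invented inputs the balance allows fewer than two systems per hectare, which illustrates why the passage treats system density, not just individual system design, as the critical siting variable.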
Abstract:
It has been common practice over past property boom and bust cycles in Australia for financial institutions and property owners who have suffered a loss in the property downturn to sue valuers for negligence. Damages claimed are based on the price differential between the valuation at or nearing the peak of the market and the subsequent sale in the market downturn. However, the context of valuers' liability has become increasingly complex as a result of statutory reforms introduced in response to the Review of the Law of Negligence: Final Report (2002), in particular the introduction of Civil Liability Acts containing proportionate liability provisions. Legislative reforms have had some positive outcomes for valuers; however, valuers need to continue to maintain high ethical standards, independence and professionalism in valuation practice.
Resumo:
Background: Understanding the frequency of foot problems can assist health care planners with resource deployment to new and emerging services such as paediatric podiatry, and can focus future research on the most salient foot conditions. Methods: A review of 2187 patient consultations during a three-month period was conducted. Patient medical and podiatric history was coded using industry standards. All patients were recruited by convenience from a metropolitan university podiatry clinic. Results: 392 new patients were identified, with a mean age of 40.6 years (range 1–95); 65% were female. Arthritic diseases, asthma, hypertension and allergies were the most common medical conditions reported. The frequency of new consultations in younger people (n = 102; 27%) exceeded that of the elderly (n = 75; 20%). Conversely, the elderly were nearly three times more prevalent in this cohort overall (n = 910; 43%) compared to younger people (n = 332; 16%). Conclusion: This study illustrates the diverse nature of pathology seen by podiatrists. Knowledge that skin lesions are highly prevalent is of relevance to health departments, given the ageing nature of most populations. Moreover, there appears to be a growing trend in the number of young people who present for care; however, government-funded access to these services is limited.
Resumo:
Many computationally intensive scientific applications involve repetitive floating point operations other than addition and multiplication, which may present a significant performance bottleneck due to the relatively large latency or low throughput involved in executing such arithmetic primitives on commodity processors. A promising alternative is to execute such primitives on Field Programmable Gate Array (FPGA) hardware acting as an application-specific custom co-processor in a high performance reconfigurable computing platform. The use of FPGAs can provide advantages such as fine-grain parallelism, but issues relating to code development in a hardware description language and efficient data transfer to and from the FPGA chip can present significant application development challenges. In this paper, we discuss our practical experiences in developing a selection of floating point hardware designs to be implemented using FPGAs. Our designs include some basic mathematical library functions which can be implemented for user defined precisions suitable for novel applications requiring non-standard floating point representation. We discuss the details of our designs along with results from performance and accuracy analysis tests.
Resumo:
The practitioner lawyer of the past had little need to reflect on process. The doctrinal research methodology developed intuitively within the common law — a research method at the core of practice. There was no need to justify or classify it within a broader research framework. Modern academic lawyers are facing a different situation. At a time when competition for limited research funds is becoming more intense, and in which interdisciplinary work is highly valued and non-lawyers are involved in the assessment of grant applications, lawyer-applicants who engage in doctrinal research need to be able to explain their methodology more clearly. Doctrinal scholars need to be more open and articulate about their methods. These methods may be different in different contexts. This paper examines the doctrinal method used in legal research and its place in recent research dialogue. Some commentators are of the view that the doctrinal method is simply scholarship rather than a separate research methodology. Richard Posner even suggests that law is ‘not a field with a distinct methodology, but an amalgam of applied logic, rhetoric, economics and familiarity with a specialized vocabulary and a particular body of texts, practices, and institutions ...’.1 Therefore, academic lawyers are beginning to realise that the doctrinal research methodology needs clarification for those outside the legal profession and that a discussion about the standing and place of doctrinal research compared to other methodologies is required.
Resumo:
Teacher professional standards have become a key policy mechanism for the reform of teaching and education in recent years. While standards policies claim to improve the quality of teaching and learning in schools today, this paper argues that a disjunction exists between the stated intentions of such programmes and the intelligibility of the practices of government in which they are invested. To this effect, the paper conducts an analytics of government of the recently released National Professional Standards for Teachers (Australian Institute for Teaching and School Leadership, 2011) arguing that the explicit, calculated rationality of the programme exists within a wider field of effects. Such analysis has the critical consequence of calling into question the claims of the programmers themselves thus breaching the self-evidence on which the standards rest.
Resumo:
Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in Bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation made was with regard to the relationship between transcription factors grouped by their regulatory role and corresponding promoter strength.
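The idea of combining a positional score with a sequence dissimilarity score to filter candidate binding sites can be illustrated with a minimal sketch. The weighting scheme, threshold and candidate sites below are invented for illustration and are not the scores fitted in the study:

```python
def combined_score(location_score, dissimilarity_score, w_loc=0.5, w_dis=0.5):
    """Weighted combination of a positional score and a sequence
    dissimilarity score for a candidate binding site.
    Both inputs are assumed to lie in [0, 1]; a high dissimilarity
    lowers the combined score. Weights are illustrative only."""
    return w_loc * location_score + w_dis * (1 - dissimilarity_score)

# Hypothetical candidates: (name, location score, dissimilarity to known motif).
candidates = [("site1", 0.9, 0.2), ("site2", 0.3, 0.8), ("site3", 0.7, 0.4)]

# Keep only candidates clearing an (arbitrary) acceptance threshold.
kept = [name for name, loc, dis in candidates if combined_score(loc, dis) >= 0.5]
print(kept)
```

Filtering on the combined score rather than either score alone is one simple way such complementary evidence can trim false positive predictions.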
Our study of E. coli σ70 promoters found support at the 0.1 significance level for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigations when more experimentally confirmed data are available. In our preliminary exploration of relationships between the key regulatory components in E. coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factor binding sites that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters whose corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available. Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel.
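The comparison described above is, mechanically, a two-sample t-test on promoter-strength scores grouped by regulatory role. A minimal sketch of Welch's form of the test follows; the scores below are fabricated solely to show the mechanics and are not the study's data:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and degrees of freedom
    (unequal variances assumed)."""
    va, vb = variance(a), variance(b)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical promoter-strength scores, grouped by whether the
# promoter's binding sites are all activators or all repressors.
activator_assoc = [0.31, 0.28, 0.35, 0.40, 0.33]   # weaker promoters
repressor_assoc = [0.52, 0.47, 0.58, 0.44, 0.50]   # stronger promoters

t, df = welch_t(activator_assoc, repressor_assoc)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A negative t here means the activator-associated group has the lower mean strength, the direction the hypothesis predicts; the resulting p-value would then be compared against the 0.1 level used in the study.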
Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter predictions [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept.
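The spectrum kernel compares two sequences through their k-mer count vectors. The sketch below shows the computation in its simplest form, on made-up sequences, without the normalisation or mismatch extensions a full SVM pipeline might add:

```python
from collections import Counter

def spectrum(seq, k):
    """Count all overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(x, y, k=3):
    """Spectrum kernel: dot product of the k-mer count vectors
    of the two sequences."""
    sx, sy = spectrum(x, k), spectrum(y, k)
    return sum(sx[w] * sy[w] for w in sx)

# Two short, invented binding-site-like sequences.
a = "TGTGATCTAG"
b = "TGTGACCTAG"
print(spectrum_kernel(a, b, k=3))  # prints 5
```

Because the kernel depends only on shared k-mer content, it can be fed directly to a standard SVM implementation as a precomputed kernel matrix.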
Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships, and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival. Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in either E. coli or B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study.
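A regulatory tree of this kind can be built from pairwise distances between genomes' sets of regulatory interactions. The sketch below computes a Jaccard distance over hypothetical Fur interactions (the genome names and target genes are invented); the resulting distance matrix could then feed any standard hierarchical clustering routine:

```python
def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| over two sets of regulatory interactions."""
    return 1 - len(a & b) / len(a | b)

# Hypothetical Fur regulatory interactions, as (TF, target gene) pairs.
networks = {
    "genome_A": {("fur", "yfeA"), ("fur", "ybtA"), ("fur", "fhuA")},
    "genome_B": {("fur", "yfeA"), ("fur", "ybtA"), ("fur", "entC")},
    "genome_C": {("fur", "yfeA"), ("fur", "fecA")},
}

# Pairwise distances: genomes sharing more interactions sit closer together.
for x in sorted(networks):
    for y in sorted(networks):
        if x < y:
            print(x, y, round(jaccard_distance(networks[x], networks[y]), 2))
```

Interactions appearing in every set would form the core-regulatory-set; those in only some genomes, the sub-regulatory-set.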
We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
Resumo:
The aim of this investigation was to elucidate the reductions in muscle, skin and core temperature following exposure to −110°C whole body cryotherapy (WBC), and compare these to 8°C cold water immersion (CWI). Twenty active male subjects were randomly assigned to a 4-min exposure of WBC or CWI. A minimum of 7 days later, subjects were exposed to the other treatment. Muscle temperature in the right vastus lateralis (n = 10), thigh skin (average, maximum and minimum) and rectal temperature (n = 10) were recorded before and 60 min after treatment. The greatest reductions (P<0.05) in muscle (mean ± SD; 1 cm: WBC, 1.6±1.2°C; CWI, 2.0±1.0°C; 2 cm: WBC, 1.2±0.7°C; CWI, 1.7±0.9°C; 3 cm: WBC, 1.6±0.6°C; CWI, 1.7±0.5°C) and rectal temperature (WBC, 0.3±0.2°C; CWI, 0.4±0.2°C) were observed 60 min after treatment. The largest reductions in average (WBC, 12.1±1.0°C; CWI, 8.4±0.7°C), minimum (WBC, 13.2±1.4°C; CWI, 8.7±0.7°C) and maximum (WBC, 8.8±2.0°C; CWI, 7.2±1.9°C) skin temperature occurred immediately after both CWI and WBC (P<0.05). Skin temperature was significantly lower (P<0.05) immediately after WBC compared to CWI. The present study demonstrates that a single WBC exposure decreases muscle and core temperature to a similar extent to that experienced after CWI. Although both treatments significantly reduced skin temperature, WBC elicited a greater decrease compared to CWI. These data may provide information to clinicians and researchers attempting to optimise WBC and CWI protocols in a clinical or sporting setting.