953 results for Annotation Tag
Abstract:
In this paper, loss reduction planning for electric distribution networks is presented, based on the successful experiences of distribution utilities in Iran and in some developed countries. The necessary technical and economic planning parameters are calculated from related projects in Iran. The cost, time, and benefits of each sub-program, covering seven loss reduction approaches, are determined. Finally, the loss reduction program, the benefit-to-cost ratio, and the return on investment under optimistic and pessimistic conditions are presented.
Abstract:
This paper presents a new method to determine a feeder reconfiguration scheme that accounts for a variable load profile. The objective function consists of system losses, reliability costs and switching costs. In order to achieve an optimal solution, the proposed method compares these costs dynamically and determines when and how it is reasonable to perform a switching operation. The proposed method divides a year into several equal time periods; then, using particle swarm optimization (PSO), optimal candidate configurations for each period are obtained. The system losses and customer interruption cost of each configuration during each period are also calculated. Then, considering the cost of switching from one configuration to another, a dynamic programming algorithm (DPA) is used to determine the annual reconfiguration scheme. Several test systems were used to validate the proposed method. The results show that obtaining an optimal solution requires comparing operating costs dynamically.
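The dynamic-programming stage lends itself to a compact illustration. The following is a minimal sketch under simplifying assumptions (per-period operating costs are already available, e.g. from PSO, and the switching-cost function is generic); it is not the paper's implementation, and all numbers are invented for illustration.

```python
# Hypothetical sketch: choose one candidate configuration per time period so that
# operating costs (losses + interruption cost) plus switching costs are minimal.
def annual_schedule(op_cost, switch_cost):
    """op_cost[t][c]: operating cost of candidate configuration c in period t.
    switch_cost(a, b): cost of switching from configuration a to configuration b."""
    T = len(op_cost)
    best = dict(enumerate(op_cost[0]))                    # best[c] = min cost ending in c
    parent = [{c: None for c in best} for _ in range(T)]  # for backtracking
    for t in range(1, T):
        new_best = {}
        for c, cost_c in enumerate(op_cost[t]):
            prev = min(best, key=lambda p: best[p] + switch_cost(p, c))
            new_best[c] = best[prev] + switch_cost(prev, c) + cost_c
            parent[t][c] = prev
        best = new_best
    last = min(best, key=best.get)                        # backtrack optimal sequence
    schedule = [last]
    for t in range(T - 1, 0, -1):
        last = parent[t][last]
        schedule.append(last)
    return list(reversed(schedule)), min(best.values())

# Toy example: 3 periods, 2 candidate configurations, flat switching cost of 1.0
op_cost = [[5.0, 6.0], [7.0, 4.0], [5.5, 5.0]]
print(annual_schedule(op_cost, lambda a, b: 0.0 if a == b else 1.0))
```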
Abstract:
This paper presents an efficient hybrid evolutionary optimization algorithm, called ACO-SA, which combines Ant Colony Optimization (ACO) and Simulated Annealing (SA) for distribution feeder reconfiguration (DFR) considering Distributed Generators (DGs). Because the DGs are privately owned, a cost-based compensation method is used to encourage DGs to generate active and reactive power. The objective function is the sum of the electrical energy generated by the DGs and by the substation bus (main bus) over the next day. The approach is tested on a real distribution feeder. The simulation results show that the proposed evolutionary optimization algorithm is robust and suitable for solving the DFR problem.
Abstract:
Gas-phase peroxyl radicals are central to our chemical understanding of combustion and atmospheric processes and are typically characterized by strong absorption in the UV (λmax ≈ 240 nm). The analogous maximum absorption feature for arylperoxyl radicals is predicted to shift into the visible but has not previously been characterized, nor have any photoproducts arising from this transition been identified. Here we describe the controlled synthesis and isolation in vacuo of an array of charge-substituted phenylperoxyl radicals at room temperature, including the 4-(N,N,N-trimethylammonium)methyl phenylperoxyl radical cation (4-Me3N+CH2-C6H4OO•), using linear ion-trap mass spectrometry. Photodissociation mass spectra obtained at wavelengths ranging from 310 to 500 nm reveal two major photoproduct channels corresponding to homolysis of the aryl-OO and arylO-O bonds, resulting in loss of O2 and O, respectively. Combining the photodissociation yields across this spectral window produces a broad (FWHM ≈ 60 nm) but clearly resolved feature centered at λmax = 403 nm (3.08 eV). The influence of the charge-tag identity and of its proximity to the radical site is investigated, and neither has an effect on the identity of the two dominant photoproduct channels. Electronic structure calculations locate the vertical B̃ ← X̃ transition of these substituted phenylperoxyl radicals within the experimental uncertainty and further predict the analogous transition for the unsubstituted phenylperoxyl radical (C6H5OO•) to be 457 nm (2.71 eV), nearly 45 nm shorter than previous estimates and in good agreement with recent computational values.
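As a quick consistency check (not part of the original abstract), the reported photon energies follow from the standard wavelength-to-energy conversion:

\[
E[\mathrm{eV}] \approx \frac{1240\ \mathrm{eV\,nm}}{\lambda[\mathrm{nm}]}, \qquad
\frac{1240}{403} \approx 3.08\ \mathrm{eV}, \qquad
\frac{1240}{457} \approx 2.71\ \mathrm{eV}.
\]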
Abstract:
A Remote Sensing Core Curriculum (RSCC) development project is currently underway. This project is being conducted under the auspices of the National Center for Geographic Information and Analysis (NCGIA). The RSCC is an outgrowth of the NCGIA GIS Core Curriculum project and grew out of discussions begun at NCGIA Initiative 12 (I-12): 'Integration of Remote Sensing and Geographic Information Systems'. This curriculum development project focuses on providing professors, teachers and instructors in undergraduate and graduate institutions with course materials from experts in specific subject-matter areas for use in the classroom.
Abstract:
Next Generation Sequencing (NGS) has revolutionised molecular biology, resulting in an explosion of data sets and an increasing role in clinical practice. Such applications necessarily require rapid identification of the organism as a prelude to annotation and further analysis. NGS data consist of a substantial number of short sequence reads, given context through downstream assembly and annotation, a process requiring reads consistent with the assumed species or species group. Highly accurate results have been obtained for restricted sets using SVM classifiers, but such methods are difficult to parallelise and success depends on careful attention to feature selection. This work examines the problem at very large scale, using a mix of synthetic and real data with a view to determining the overall structure of the problem and the effectiveness of parallel ensembles of simpler classifiers (principally random forests) in addressing the challenges of large scale genomics.
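To make the ensemble idea concrete, the following is a minimal, hypothetical sketch (not the pipeline used in the work above): short reads are mapped to k-mer count features and classified with a random forest, whose trees are grown independently and can therefore be trained in parallel. The read sequences, labels and k-mer length are illustrative only.

```python
# Hypothetical sketch: species classification of short reads with a random forest
# over k-mer counts. Data and parameters are toy values for illustration.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

reads = ["ACGTACGTGGTA", "TTGACCGTAACG", "ACGTACGAGGTC", "TTGACCGTTACG"]  # toy reads
labels = ["speciesA", "speciesB", "speciesA", "speciesB"]                 # toy labels

k = 4  # k-mer length (assumed)
vectorizer = CountVectorizer(analyzer="char", ngram_range=(k, k), lowercase=False)
X = vectorizer.fit_transform(reads)               # sparse k-mer count matrix

# Each tree is built independently, so training parallelises across cores (n_jobs=-1).
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X, labels)

new_read = ["ACGTACGTGGTC"]                       # unseen toy read
print(clf.predict(vectorizer.transform(new_read)))
```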
Abstract:
This paper gives an overview of the INEX 2008 Ad Hoc Track. The main goals of the Ad Hoc Track were two-fold. The first goal was to investigate the value of the internal document structure (as provided by the XML mark-up) for retrieving relevant information. This is a continuation of INEX 2007 and, for this reason, the retrieval results are liberalized to arbitrary passages and measures were chosen to fairly compare systems retrieving elements, ranges of elements, and arbitrary passages. The second goal was to compare focused retrieval to article retrieval more directly than in earlier years. For this reason, standard document retrieval rankings were derived from all runs and evaluated with standard measures. In addition, a set of queries targeting Wikipedia was derived from a proxy log, and the runs were also evaluated against the clicked Wikipedia pages. The INEX 2008 Ad Hoc Track featured three tasks: for the Focused Task, a ranked list of non-overlapping results (elements or passages) was required; for the Relevant in Context Task, non-overlapping results (elements or passages) were returned grouped by the article from which they came; and for the Best in Context Task, a single starting point (element start tag or passage start) for each article was required. We discuss the results for the three tasks and examine the relative effectiveness of element and passage retrieval. This is examined in the context of content-only (CO, or keyword) search as well as content-and-structure (CAS, or structured) search. Finally, we look at the ability of focused retrieval techniques to rank articles, using standard document retrieval techniques, both against the judged topics and against queries and clicks from a proxy log.
Abstract:
The aim of this study was to use lipidomics to determine if the lipid composition of apolipoprotein-B-containing lipoproteins is modified by dyslipidaemia in type 2 diabetes and if any of the identified changes potentially have biological relevance in the pathophysiology of type 2 diabetes. VLDL and LDL from normolipidaemic and dyslipidaemic type 2 diabetic women and controls were isolated and quantified with HPLC and mass spectrometry. A detailed molecular characterisation of VLDL triacylglycerols (TAG) was also performed using the novel ozone-induced dissociation method, which allowed us to distinguish vaccenic acid (C18:1 n-7) from oleic acid (C18:1 n-9) in specific TAG species. Lipid class composition was very similar in VLDL and LDL from normolipidaemic type 2 diabetic and control participants. By contrast, dyslipidaemia was associated with significant changes in both lipid classes (e.g. increased diacylglycerols) and lipid species (e.g. increased C16:1 and C20:3 in phosphatidylcholine and cholesteryl ester and increased C16:0 [palmitic acid] and vaccenic acid in TAG). Levels of palmitic acid in VLDL and LDL TAG correlated with insulin resistance, and VLDL TAG enriched in palmitic acid promoted increased secretion of proinflammatory mediators from human smooth muscle cells. We showed that dyslipidaemia is associated with major changes in both lipid class and lipid species composition in VLDL and LDL from women with type 2 diabetes. In addition, we identified specific molecular lipid species that both correlate with clinical variables and are proinflammatory. Our study thus shows the potential of advanced lipidomic methods to further understand the pathophysiology of type 2 diabetes.
Abstract:
The morphology of plasmonic nano-assemblies has a direct influence on optical properties such as the localised surface plasmon resonance (LSPR) and the surface-enhanced Raman scattering (SERS) intensity. Assemblies with core-satellite morphologies are of particular interest because this morphology provides a high density of hot-spots while constraining the overall size. Herein, a simple method is reported for the self-assembly of gold nanoparticle (NP) nano-assemblies with a core-satellite morphology, mediated by hyperbranched polymer (HBP) linkers. The HBP linkers have repeat units that do not interact strongly with gold NPs but have multiple end-groups that specifically interact with the gold NPs and act as anchoring points, resulting in nano-assemblies with a large (~48 nm) core surrounded by smaller (~15 nm) satellites. It was possible to control the number of satellites in an assembly, which allowed optical parameters such as the SPR maxima and the SERS intensity to be tuned. These results were found to be consistent with finite-difference time-domain (FDTD) simulations. Furthermore, the multiplexing of the nano-assemblies with a series of Raman tag molecules was demonstrated, without an observable signal arising from the HBP linker after tagging. Such plasmonic nano-assemblies could potentially serve as efficient SERS-based diagnostics or biomedical imaging agents in nanomedicine.
Abstract:
The ability of cloud computing to provide almost unlimited storage, backup and recovery, and quick deployment contributes to its widespread attention and implementation. Cloud computing has also become an attractive choice for mobile users. Due to the limited capabilities of mobile devices, such as power scarcity and the inability to handle computation-intensive tasks, selected computation needs to be outsourced to resourceful cloud servers. However, many challenges need to be addressed in computation offloading for mobile cloud computing, such as communication cost, connectivity maintenance and incurred latency. This paper presents a taxonomy of computation offloading approaches that aim to address these challenges. The taxonomy provides guidelines for identifying research scope in computation offloading for mobile cloud computing. We also outline directions and anticipated trends for future research.
Abstract:
Molecular biology is a scientific discipline whose character has changed fundamentally over the past decade, coming to rely on large-scale datasets (public and locally generated) and their computational analysis and annotation. Undergraduate education of biologists must increasingly couple this domain context with a data-driven computational scientific method. Yet modern programming and scripting languages and rich computational environments such as R and MATLAB present significant barriers to those with limited exposure to computer science, and may require substantial tutorial assistance over an extended period if progress is to be made. In this paper we report our experience of undergraduate bioinformatics education using the familiar, ubiquitous spreadsheet environment of Microsoft Excel. We describe a configurable extension called QUT.Bio.Excel, a custom ribbon supporting a rich set of data sources, external tools and interactive processing within the spreadsheet, and a range of problems that demonstrate its utility and its success in addressing the needs of students over their studies.
Abstract:
Social media is playing an ever-increasing role both in viewers' engagement with television and in the television industry's evaluation of programming, in Australia (the focus of our study) and beyond. Twitter hashtags and viewer comments are increasingly incorporated into broadcasts, while Facebook fan pages provide a means of marketing upcoming shows and television personalities directly into the social media feeds of millions of users. Additionally, bespoke applications such as FanGo and ZeeBox, which interact with the mainstream social networks, are increasingly being utilized by broadcasters for interactive elements of programming (c.f. Harrington, Highfield and Bruns, 2012). However, both academic and industry studies of these platforms have focused on measuring content during the specific broadcast of a show, or a period surrounding it (e.g. 3 hours before until 3 am the next day, in the case of 2013 Nielsen SocialGuide reports). In this paper, we argue that this focus ignores a period that is significant for both television producers and advertisers: the lead-up to the program. If, as we argue elsewhere (Bruns, Woodford, Highfield & Prowd, forthcoming), users are persuaded to engage with content both by advertising of the Twitter hashtag or Facebook page and by observing their network connections engaging with such content, the period before and between shows may have a significant impact on a viewer's likelihood of watching a show. The significance of this period for broadcasters is clearly highlighted by the effort they devote to advertising forthcoming shows through several channels, including television and social media, but also more widely. Biltereyst (2004, p. 123) has argued that reality television generates controversy to receive media attention, and our previous small-scale work on reality shows during 2013 and 2014 supports the theory that promoting controversial behavior is likely to lead to increased viewing (Woodford & Prowd, 2014a). It remains unclear, however, to what extent this applies to other television genres. Similarly, while networks' use of social media has been increasing, best practices remain unclear. Thus, by applying our telemetrics, that is, social media metrics for television based on sabermetric approaches (Woodford, Prowd & Bruns, forthcoming; c.f. Woodford & Prowd, 2014b), to the period between shows, we are able to better understand the period when key viewing decisions may be made, to establish the significance of observing discussions within one's network during the period between shows, and to identify best-practice examples of promoting a show using social media.
Abstract:
Over the past decade the mitochondrial (mt) genome has become the most widely used genomic resource available for systematic entomology. While the availability of other types of '-omics' data, in particular transcriptomes, is increasing rapidly, mt genomes are still vastly cheaper to sequence and are far less demanding of high-quality templates. Furthermore, almost all other '-omics' approaches also sequence the mt genome, so it can form a bridge between legacy and contemporary datasets. Mitochondrial genomes have now been sequenced for all insect orders, and in many instances for representatives of each major lineage within orders (suborders, series or superfamilies, depending on the group). They have also been applied to systematic questions at all taxonomic scales, from resolving interordinal relationships (e.g. Cameron et al., 2009; Wan et al., 2012; Wang et al., 2012), through many intraordinal (e.g. Dowton et al., 2009; Timmermans et al., 2010; Zhao et al., 2013a) and family-level studies (e.g. Nelson et al., 2012; Zhao et al., 2013b), to population/biogeographic studies (e.g. Ma et al., 2012). Methodological issues around the use of mt genomes in insect phylogenetic analyses, and the empirical results found to date, were recently reviewed by Cameron (2014), yet the technical aspects of sequencing and annotating mt genomes were not covered. Most papers that report new mt genomes describe their methods in a simplified form that can be difficult to replicate without specific knowledge of the field. Published studies use such a wide range of approaches, usually without justification for the one chosen, that confusion about commonly used jargon such as 'long PCR' and 'primer walking' can be a serious barrier to entry. Furthermore, sequenced mt genomes have been annotated (gene locations defined) to widely varying standards, and improving data quality through consistent annotation procedures will benefit all downstream users of these datasets. The aims of this review are therefore to: 1. describe in detail the various sequencing methods used on insect mt genomes; 2. explore the strengths and weaknesses of the different approaches; 3. outline the procedures and software used for insect mt genome annotation; and 4. highlight quality-control steps used for new annotations and to improve the re-annotation of previously sequenced mt genomes used in systematic or comparative research.
Abstract:
This paper provides a preliminary analysis of an autonomous uncooperative collision avoidance strategy for unmanned aircraft using image-based visual control. Assuming target detection, the approach consists of three parts. First, a novel decision strategy is used to determine appropriate reference image features to track for safe avoidance. This is achieved by considering the current rules of the air (regulations), the properties of spiral motion and the expected visual tracking errors. Second, a spherical visual predictive control (VPC) scheme is used to guide the aircraft along a safe spiral-like trajectory about the object. Lastly, a stopping decision based on thresholding a cost function is used to determine when to stop the avoidance behaviour. The approach does not require estimation of range or time to collision, and instead relies on tuning two mutually exclusive decision thresholds to ensure satisfactory performance.
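The stopping decision lends itself to a small illustration. The following is a hypothetical sketch of one way such threshold-based avoid/stop logic could look; the cost function, threshold values and image-feature values are invented for illustration and do not reproduce the paper's spherical VPC formulation.

```python
# Hypothetical sketch: avoidance starts when a visual tracking cost exceeds one
# threshold and the stopping decision fires when the cost falls below a second,
# lower threshold. All values below are toy values.
def feature_cost(tracked, reference):
    """Toy cost: squared error between tracked and reference image features."""
    return sum((t - r) ** 2 for t, r in zip(tracked, reference))

START_THRESHOLD = 4.0   # assumed: begin avoidance above this cost
STOP_THRESHOLD = 0.5    # assumed: stop avoidance below this cost

# Toy sequence of (tracked, reference) image-feature pairs
feature_stream = [
    ([0.0, 0.0], [1.0, 0.0]),   # low cost: nominal flight
    ([0.0, 0.0], [3.0, 1.0]),   # cost jumps: avoidance begins
    ([1.0, 0.5], [3.0, 1.0]),   # VPC would steer along a spiral-like path here
    ([2.8, 0.9], [3.0, 1.0]),   # cost below stop threshold: avoidance ends
    ([3.0, 1.0], [3.0, 1.0]),   # nominal flight continues
]

avoiding = False
for tracked, reference in feature_stream:
    cost = feature_cost(tracked, reference)
    if not avoiding and cost > START_THRESHOLD:
        avoiding = True        # commit to the avoidance behaviour
    elif avoiding and cost < STOP_THRESHOLD:
        avoiding = False       # stopping decision: resume nominal flight
    print(f"cost={cost:.2f} avoiding={avoiding}")
```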