821 results for Dual priority scheduling
Abstract:
Formal methods provide a means of reasoning about computer programs in order to prove correctness criteria. One subtype of formal methods is based on the weakest precondition predicate transformer semantics and uses guarded commands as the basic modelling construct. Examples of such formalisms are Action Systems and Event-B. Guarded commands can intuitively be understood as actions that may be triggered when an associated guard condition holds. Guarded commands whose guards hold are nondeterministically chosen for execution, but no further control flow is present by default. Such a modelling approach is convenient for proving correctness, and the Refinement Calculus allows for a stepwise development method. It also has a parallel interpretation facilitating the development of concurrent software, and it is suitable for describing event-driven scenarios. However, for many application areas, the execution paradigm traditionally used comprises more explicit control flow, which constitutes an obstacle for using the above-mentioned formal methods. In this thesis, we study how guarded-command-based modelling approaches can be conveniently and efficiently scheduled in different scenarios. We first focus on the modelling of trust for transactions in a social networking setting. Due to the event-based nature of the scenario, the use of guarded commands turns out to be relatively straightforward. We continue by studying the modelling of concurrent software, with particular focus on compute-intensive scenarios. We go from theoretical considerations to the feasibility of implementation by evaluating the performance and scalability of executing a case study model in parallel using automatic scheduling performed by a dedicated scheduler. Finally, we propose a more explicit and non-centralised approach in which the flow of each task is controlled by a schedule of its own. The schedules are expressed in a dedicated scheduling language, and patterns assist the developer in proving correctness of the scheduled model with respect to the original one.
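To make the execution model concrete, the following is a minimal Python sketch of the guarded-command semantics described above (the names `run`, `commands`, and the example state are illustrative, not drawn from the thesis): among the commands whose guards hold, one is chosen nondeterministically, and no further control flow is imposed.

```python
import random

# A guarded command pairs a guard predicate with an action on the state.
# All names here are illustrative sketches, not from the thesis.

def run(state, commands, steps=100):
    """Repeatedly pick an enabled guarded command at random and execute it."""
    for _ in range(steps):
        enabled = [action for guard, action in commands if guard(state)]
        if not enabled:                   # no guard holds: execution terminates
            break
        random.choice(enabled)(state)     # nondeterministic choice among enabled actions
    return state

# Example: two actions that increment counters while below a bound.
state = {"x": 0, "y": 0}
commands = [
    (lambda s: s["x"] < 5, lambda s: s.__setitem__("x", s["x"] + 1)),
    (lambda s: s["y"] < 3, lambda s: s.__setitem__("y", s["y"] + 1)),
]
print(run(state, commands))               # e.g. {'x': 5, 'y': 3}
```

A scheduler of the kind studied in the thesis would, roughly speaking, replace the random choice with an explicit policy while preserving correctness with respect to the original nondeterministic model.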
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent a data dependency in the form of a queue. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, the node can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
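As a rough illustration of the firing model described above (not RVC-CAL syntax; the class and function names are invented for this sketch), the following Python fragment shows nodes that communicate only through FIFO queues and fire once sufficient input tokens are available, driven by a naive dynamic scheduler:

```python
from collections import deque

# Minimal dataflow sketch: nodes communicate only through FIFO queues and
# fire when enough tokens are available. Names are illustrative, not RVC-CAL.

class Node:
    def __init__(self, fn, inputs, output, consume=1):
        self.fn, self.inputs, self.output, self.consume = fn, inputs, output, consume

    def can_fire(self):
        return all(len(q) >= self.consume for q in self.inputs)

    def fire(self):
        args = [q.popleft() for q in self.inputs]   # consume input tokens
        self.output.append(self.fn(*args))          # produce an output token

# Pipeline: source queue -> double -> add_one; a trivial round-robin scheduler.
a, b, c = deque([1, 2, 3]), deque(), deque()
nodes = [Node(lambda x: 2 * x, [a], b), Node(lambda x: x + 1, [b], c)]
while any(n.can_fire() for n in nodes):             # dynamic scheduling loop
    for n in nodes:
        if n.can_fire():
            n.fire()
print(list(c))   # [3, 5, 7]
```

A quasi-static scheduler of the kind developed in the thesis would replace most of the run-time `can_fire` tests with pre-computed static firing sequences, leaving only the genuinely dynamic decisions to be resolved at run-time.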
Abstract:
Tool center point calibration is a known problem in industrial robotics. The major focus of academic research is to enhance the accuracy and repeatability of next-generation robots. However, operators of currently available robots are working within the limits of the robot's repeatability and require calibration methods suitable for these basic applications. This study was conducted in association with Stresstech Oy, which provides solutions for manufacturing quality control. Their sensor, based on the Barkhausen noise effect, requires accurate positioning. The accuracy requirement poses a tool center point calibration problem when measurements are executed with an industrial robot. Multiple options for automatic tool center point calibration are available on the market. Manufacturers provide customized calibrators for most robot types and tools. With the handmade sensors and multiple robot types that Stresstech uses, this would require a great deal of labor. This thesis introduces a calibration method that is suitable for all robots that have two free digital input ports. It builds on the traditional method of using a light barrier to detect the tool in the robot coordinate system. However, this method utilizes two parallel light barriers to simultaneously measure and detect the center axis of the tool. Rotations about two axes are defined from the center axis. The last rotation, about the Z-axis, is calculated for tools whose widths along the X- and Y-axes differ. The results indicate that this method is suitable for calibrating the geometric tool center point of a Barkhausen noise sensor. In the repeatability tests, a standard deviation within the robot's repeatability was obtained. The Barkhausen noise signal was also evaluated after recalibration, and the results indicate correct calibration. However, future studies should be conducted using a more accurate manipulator, since the method employs the robot itself as a measuring device.
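The geometric core of the two-barrier idea can be sketched as follows. This is a simplified illustration under assumed conventions, not the thesis procedure: it assumes the two parallel barriers each yield one point on the tool's center axis in robot coordinates, from which the axis direction and the two tilt rotations follow.

```python
import numpy as np

# Hedged sketch of the geometric idea only: given two points p1, p2 on the
# tool's center axis (one per light barrier, in robot coordinates), derive
# the axis direction and express its tilt as rotations about X and Y.
# Sign conventions and all calibration offsets are assumptions.

def axis_rotations(p1, p2):
    d = (p2 - p1) / np.linalg.norm(p2 - p1)        # unit vector along the center axis
    rx = np.arctan2(d[1], d[2])                    # tilt about X from the Y/Z components
    ry = np.arctan2(-d[0], np.hypot(d[1], d[2]))   # tilt about Y
    return np.degrees(rx), np.degrees(ry)

p1 = np.array([0.0, 0.0, 0.0])                     # crossing point at the first barrier
p2 = np.array([1.0, 0.0, 10.0])                    # crossing point at the second barrier
print(axis_rotations(p1, p2))                      # axis tilt as two rotation angles
```

The remaining rotation about the Z-axis, as the abstract notes, only becomes observable for tools whose X and Y widths differ, since a rotationally symmetric tool looks identical to the barriers at any Z rotation.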
Abstract:
Within the framework of the working memory model proposed by A. Baddeley and G. Hitch, a dual-task paradigm has been suggested to evaluate the capacity to perform two concurrent tasks simultaneously. This capacity is assumed to reflect the functioning of the central executive component, which appears to be impaired in patients with dysexecutive syndrome. The present study extends the investigation of an index ("mu"), which is supposed to indicate the capacity to coordinate concurrent auditory digit span and tracking tasks, by testing the influence of training on performance in the dual task. Presenting the same digit sequence lists or always-different lists did not affect performance differently. The span length affected the mu values. The improved performance in the tasks under the dual condition closely resembled the improvement in single-task performance. So, although training improved performance in the single and dual conditions, especially for the tracking component, the mu values remained stable throughout the sessions when the single tasks were performed first. Conversely, training improved the capacity of dual-task coordination throughout the sessions when the dual task was performed first, addressing the issue of the contribution of within-session practice to the mu index.
Abstract:
The measure "mu", proposed as an index of the ability to coordinate concurrent box-crossing (BC) and digit-span (DS) tasks in the dual task (DT), should reflect the capacity of the executive component of the working memory system. We investigated the effect of practice in BC and of a change in the digit span on mu by adding previous practice trials in BC and diminishing, maintaining or increasing the digit sequence length. The mu behavior was evaluated throughout three trials of the test. Reported strategies in digit tasks were also analyzed. Subjects with diminished span showed the best performance in DT due to a stable performance in DS and BC in the single- and dual-task conditions. These subjects also showed a more stable performance throughout trials. Subjects with diminished span tended to employ effortless strategies, whereas subjects with increased span employed effort-requiring strategies and showed the lowest means of mu. Subjects with initial practice trials showed the best performance in BC and the most differentiated performance between the single- and dual-task conditions in BC. The correlation coefficient between the mu values obtained in the first and second trials was 0.814 for subjects with diminished span and practice trials in BC. It seems that the within-session practice in BC and the performance variability in DS affect the reliability of the index mu. To control these factors we propose the introduction of previous practice trials in BC and a modification of the current method to determine the digit sequence length. This proposal should contribute to the development of a more reliable method to evaluate the executive capacity of coordination in the dual-task paradigm.
Abstract:
The objective of this project was to introduce a new software product to the pulp industry, a new market for the case company. An optimization-based scheduling tool has been developed to allow pulp operations to better control their production processes and improve both production efficiency and stability. Both the work here and earlier research indicate that there is a potential for savings of around 1-5%. All the supporting data is available today, coming from distributed control systems, data historians, and other existing sources. The pulp mill model, together with the scheduler, allows what-if analyses of the impacts and timely feasibility of various external actions such as planned maintenance of any particular mill operation. The visibility gained from the model also proves to be a real benefit. The aim is to satisfy demand and gain extra profit while achieving the required customer service level. Research effort has been put into understanding both the minimum features needed to satisfy the scheduling requirements in the industry and the overall existence of the market. A qualitative study was constructed to identify both the competitive situation and the requirements versus gaps in the market. It becomes clear that there is no such system in the marketplace today, and also that there is room to improve the target market's overall process efficiency through such a planning tool. This thesis also provides a better overall understanding of the different processes in this particular industry for the case company.
Abstract:
The removal of organics from copper electrolyte solutions after solvent extraction by dual media filtration is one of the most efficient ways to ensure a clean electrolyte flow into electrowinning. The clean electrolyte ensures good-quality cathode plate production. Dual media filtration uses two layers of filter media, anthracite and garnet. The anthracite layer helps coalesce the entrained organic droplets, which then float to the top of the filter and return to the solvent extraction process. The garnet layer catches any solids left in the electrolyte traveling through the filter media. This thesis concentrates on the characterization of five different anthracites in order to find differences using specific surface area analysis, particle size analysis, and morphology analysis. These results are compared to the pressure loss values obtained from lab column tests and to bed expansion behavior. The goal of the thesis was to find out whether there were any differences in the anthracites that would make one perform better than another. No major differences were found in any aspect of the particle characterization, but the differences that were found should be studied further in order to confirm the significance of porosity, surface area, intensity mean, and intensity SD (standard deviation) for anthracites and their use in dual media filtration. The thesis analyzed anthracite samples in a way that is not found in any public literature sources, and further studies on the issue would bring more knowledge to the electrolyte process.
Abstract:
Operational excellence of individual tramp shipping companies is important in today's market, where competition is intense, freight revenues are modest, capital costs are high due to the global financial crisis, and a tighter regulatory framework is generating additional costs and challenges for the industry. This thesis concentrates on tramp shipping, where a tramp operator, in the form of an individual case company specialized in short-sea shipping activities in the Baltic Sea region, is searching for ways to map its current fleet operations and better understand potential ways to improve the overall routing and scheduling decisions. The research problem is related to tramp fleet planning where several cargoes are carried on board at the same time; these are here systematically referred to as part cargoes. The purpose is to determine the pivotal dimensions and characteristics of these part cargo operations in tramp shipping, and to offer both the individual case company and the wider research community a better understanding of the potential risks and benefits related to the utilization of part cargo operations. A mixed method research approach is utilized in this research, as the objectives are related to complex, real-life business practices in the field of supply chain management and, more specifically, maritime logistics. A quantitative analysis of different voyage scenarios is executed, including alternative voyage legs with varying cost structure and customer involvement. An online questionnaire designed and prepared by the case company's decision group provides the desired data on the predominant attitudes and views of the most important industrial customers regarding part cargo-related operations and the potential future utilization of this business model. The results gained from these quantitative methods are complemented with qualitative data collection tools, along with suitable secondary data sources. Based on the results and a logical analysis of the different data sources, a framework for characterizing the different aspects of part cargo operations is developed, utilizing both existing research and an empirical investigation of the phenomenon. In conclusion, part cargoes can be part of viable fleet operations, and can even increase fleet flexibility to a certain extent. Naturally, several hindrances to this development are recognized as well, such as potential issues with information gathering and sharing, inefficient port activities, and increased transit times.
Abstract:
The present study examined the bullying experiences of a group of students, age 10-14 years, identified as having behaviour problems. A total of ten students participated in a series of mixed-methodology activities, including self-report questionnaires, storytelling exercises, and interview-style journaling. The main research questions were related to the prevalence of bully/victims and the types of bullying experiences in this population. Questionnaires gathered information about their involvement in bullying, as well as about psychological risk factors including normative beliefs about antisocial acts, impulsivity, problem solving, and coping strategies. Journal questions expanded on these themes and allowed students to explain their personal experiences as bullies and victims as well as provide suggestions for intervention. The overall results indicated that all ten students in this sample had participated in bullying as both a bully and a victim. This high prevalence of bully/victim involvement among students from behavioural classrooms is in sharp contrast with the general population, where the prevalence is about 33%. In addition, a common thread was found indicating that the students who participated in this study demonstrate characteristics of emotionally dysregulated reactive bullies. Theoretical implications and educational practices are discussed.
Abstract:
Several irrigation treatments were evaluated on Sovereign Coronation table grapes at two sites over a 3-year period in the cool humid Niagara Peninsula of Ontario. Trials were conducted in the Hippie (Beamsville, ON) and the Lambert Vineyards (Niagara-on-the-Lake, ON) in 2003 to 2005 with the objective of assessing the usefulness of the modified Penman-Monteith equation to accurately schedule vine irrigation needs. Data (relative humidity, wind speed, solar radiation, and temperature) required to precisely calculate evapotranspiration (ET0) were downloaded from the Ontario Weather Network. One of two ET0 values (either 100 or 150%) was used in combination with one of two crop coefficients (Kc; either fixed at 0.75, or 0.2 to 0.8 based upon increasing canopy volume) to calculate the amount of irrigation water required. The five irrigation treatments were: unirrigated control; 100ET × Kc = 0.75; 150ET × Kc = 0.75; 100ET × Kc = 0.2-0.8; and 150ET × Kc = 0.2-0.8. Transpiration, water potential (ψ), and soil moisture data were collected each growing season. Yield component data were collected, and berries from each treatment were analyzed for soluble solids (Brix), pH, titratable acidity (TA), anthocyanins, methyl anthranilate (MA), and total volatile esters (TVE). Irrigation showed a substantial positive effect on transpiration rate and soil moisture; the control treatment showed consistently lower transpiration and soil moisture over the 3 seasons. Transpiration appeared to accurately reflect the water status of Sovereign Coronation grapevines. Soil moisture also accurately reflected the level of irrigation. Moreover, irrigation had an impact on leaf ψ, which was more negative throughout the 3 seasons for vines that were not irrigated. Irrigation had a substantial positive effect on yield (kg/vine) and its various components (clusters/vine, cluster weight, and berries/cluster) in 2003 and 2005. Berry weights were higher under the irrigated treatments at both sites. Berry weight consistently appeared to be the main factor leading to these increased yields, as inconsistent responses were noted for some yield variables. Soluble solids were highest under the ET150 and ET100 treatments, both with Kc at 0.75. Both pH and TA were highest under the control treatments in 2003 and 2004, but highest under the irrigated treatments in 2005. Anthocyanins and phenols were highest under the control treatments in 2003 and 2004, but highest under the irrigated treatments in 2005. MA and TVE were highest under the ET150 treatments. Vine and soil water status measurements (soil moisture, leaf ψ, and transpiration) confirmed that irrigation was required in the summers of 2003 and 2005 due to dry weather in those years. They also partially supported the hypothesis that the Penman-Monteith equation is useful for calculating vineyard water needs. Both ET treatments gave clear evidence that irrigation can be effective in reducing water stress and in improving vine performance, yield, and fruit composition. Use of properly scheduled irrigation was beneficial for Sovereign Coronation table grapes in the Niagara region. The findings herein should give growers some strong guidelines on when, how, and how much to irrigate their vineyards.
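For illustration, the arithmetic behind the treatments reads roughly as follows. This is a simplified Python sketch: the function names and the linear Kc ramp are assumptions, and agronomic details such as rainfall offsets and emitter rates are omitted.

```python
# Sketch of the treatment arithmetic described above: daily irrigation is
# reference evapotranspiration (ET0, mm/day) scaled by a treatment
# multiplier (100% or 150%) and a crop coefficient Kc, either fixed at
# 0.75 or ramped from 0.2 to 0.8 as the canopy grows. Illustrative only.

def irrigation_mm(et0_mm, et_multiplier, kc):
    """Water requirement for one day, in mm."""
    return et0_mm * et_multiplier * kc

def ramped_kc(canopy_fraction, kc_min=0.2, kc_max=0.8):
    """Kc increasing linearly with canopy volume (assumed linear ramp)."""
    return kc_min + (kc_max - kc_min) * canopy_fraction

print(irrigation_mm(4.0, 1.0, 0.75))            # 100ET, fixed Kc    -> 3.0 mm
print(irrigation_mm(4.0, 1.5, ramped_kc(0.5)))  # 150ET, mid-season  -> 3.0 mm
```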
Abstract:
This qualitative study explored secondary teachers' perceptions of scheduling in relation to pedagogy, curriculum, and observation of student learning. Its objective was to determine the best way to organize scheduling for the delivery of Ontario's new 4-year curriculum. Six participants were chosen: two were teaching in a semestered timetable, one in a traditional timetable, and three had experience in both schedules. Participants related a pressure-cooker "lived experience", with weaker students in the semester system experiencing a particularly harsh environment. The inadequate amount of time for review in content-heavy courses, gap scheduling problems, catch-up difficulties for students missing classes, and the fast pace of semestering are identified as factors negatively impacting these students. Government testing adds to the pressure by shifting teachers' time and attention in the classroom from deeper learning to a superficial coverage of material, from curriculum as lived to curriculum as text to be covered. Scheduling choice should be available in public education to accommodate the needs of all students. Curriculum guidelines need to be revamped to reflect the content that teachers believe is necessary for successful course delivery. Applied-level courses need to be developed for students who are not academically inferior but learn differently.
Abstract:
Self-dual doubly even linear binary error-correcting codes, often referred to as Type II codes, are closely related to many combinatorial structures such as 5-designs. Extremal codes are codes that have the largest possible minimum distance for a given length and dimension. The existence of an extremal (72,36,16) Type II code is still open. Previous results show that the automorphism group of a putative code C with the aforementioned properties has order 5 or an order dividing 24. In this work, we present a method and the results of an exhaustive search showing that such a code C cannot admit the automorphism group Z6. In addition, we present a so-far-unpublished construction of the extended Golay code by P. Becker. We generalize the notion and provide an example of another Type II code that can be obtained in this fashion. Consequently, we relate Becker's construction to the construction of binary Type II codes from codes over GF(2^r) via the Gray map.
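For readers unfamiliar with the definitions, the following Python sketch checks the two Type II properties, self-duality (C = C⊥, i.e. G·Gᵀ ≡ 0 mod 2 with dimension n/2) and double evenness (every codeword weight divisible by 4), on the extended Hamming [8,4,4] code, the smallest Type II code. The brute-force enumeration is purely illustrative and does not scale to the (72,36,16) search discussed in the abstract.

```python
import itertools
import numpy as np

# Generator matrix of the extended Hamming [8,4,4] code (a Type II code).
G = np.array([[1,0,0,0,0,1,1,1],
              [0,1,0,0,1,0,1,1],
              [0,0,1,0,1,1,0,1],
              [0,0,0,1,1,1,1,0]], dtype=int)

n, k = G.shape[1], G.shape[0]
# Self-dual: C is self-orthogonal (G * G^T = 0 over GF(2)) and dim = n/2.
self_dual = (k == n // 2) and not ((G @ G.T) % 2).any()

# Doubly even: enumerate all 2^k codewords and check weights mod 4.
words = [(np.array(m) @ G) % 2 for m in itertools.product([0, 1], repeat=k)]
doubly_even = all(int(w.sum()) % 4 == 0 for w in words)
min_dist = min(int(w.sum()) for w in words if w.any())

print(self_dual, doubly_even, min_dist)   # True True 4 -> Type II, d = 4
```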
Abstract:
TGA2 is a dual-function Systemic Acquired Resistance (SAR) transcription factor involved in the activation and repression of pathogenesis-related (PR) genes. Recent studies have shown that TGA2 is able to switch from a basal repressor to an activator, likely through regulatory control from its N-terminus. The N-terminus has also been shown to affect DNA binding of the TGA2 bZIP domain when phosphorylated by Casein Kinase II (CK2). The mechanisms involved in directing a switch from basal repressor to activator, and the role of kinase activity, have not previously been examined in detail. This study provides evidence for the involvement of a CK2-like kinase in the switch of TGA2 activity from repressor to activator, regulating the DNA-binding activity of TGA2 by phosphorylating residues in the N-terminus of the protein.
Abstract:
Two classes of compounds have been prepared and characterized as building blocks for chiral magnets and ferromagnetic conductors. In the first project, the organic framework of a pentadentate (N3O2) macrocycle has been synthetically modified to introduce phenyl substituents, and the synthesis of four new [Fe(II)(N3O2)(CN)2] complexes (I)-(IV) is presented. [Molecular diagram available in pdf] This work represents the first structural and magnetic studies of a family of spin crossover macrocycles that comprises both structural and stereo-isomers. Magnetic susceptibility and Mössbauer data for the R,R-complex (I) are consistent with both a thermal and a light-induced spin crossover transition. The X-ray data support a change in geometry accompanying the thermal spin transition, from a high spin (HS) 7-coordinate complex at room temperature to a low spin (LS) 5-coordinate complex at 100 K. The crystal structure of the racemic complex (III) reveals a HS, 7-coordinate complex at 200 K that undergoes no significant structural changes on cooling. In contrast, the magnetic susceptibility and Mössbauer data collected on a powder sample of the racemic complex are consistent with a LS complex. Finally, the meso complex (IV) was prepared, and its structure and magnetic properties are consistent with a 5-coordinate LS complex that remains low spin but undergoes conformational changes on cooling in solution. The chiral [Fe(II)(N3O2)(CN)2] macrocycle (I), together with its Mn(II) and Fe(II) derivatives, has also been exploited as a building block for the self-assembly of chiral magnets. In the second project, a synthetic route for the preparation of tetrathiafulvalene (TTF) donors covalently attached to a diisopropyl verdazyl radical via a cross-conjugated pyridyl linker is presented. Following this strategy, four new TTF-py-(diisopropyl)verdazyl radicals have been prepared and characterized (V)-(VIII). [Molecular diagram available in pdf] The first (2:1) charge transfer complex of a TTF-py-(diisopropyl)verdazyl radical donor and a TCNQ acceptor has been prepared and structurally characterized. The crystal packing shows that the donor and acceptor molecules are organized in a mixed stacking arrangement consistent with its insulating behaviour. EPR and magnetic susceptibility data support intramolecular ferromagnetic interactions between the TTF and the verdazyl radicals, and antiferromagnetic interactions between TTF donors within a stack. In an attempt to increase the intramolecular exchange interaction between the two radicals, a TTF-x-(diisopropyl)verdazyl radical (IX) was prepared, where the two radicals are connected via a conjugated divinylene linker. The neutral radical donors stack in a more favourable head-to-head arrangement, but the bulky isopropyl groups prevent the donor radicals from stacking close enough together to facilitate good orbital overlap. [Molecular diagram available in pdf]