921 results for input-output analysis


Relevance:

30.00%

Publisher:

Abstract:

It has been known since the 1970s that laser beams are suitable for processing paper materials. In this thesis, the term "paper materials" covers all wood-fibre-based materials, such as dried pulp, copy paper, newspaper, cardboard, corrugated board and tissue paper. Accordingly, "laser processing" in this thesis refers to all laser treatments resulting in material removal, such as cutting, partial cutting, marking, creasing and perforation, that can be used to process paper materials. Laser technology offers many advantages for processing paper materials: it is a non-contact method, allows freedom of processing geometry and is a reliable technology for non-stop production. The packaging industry in particular is a very promising area for laser processing applications. However, there were only a few industrial laser processing applications worldwide even at the beginning of the 2010s. One reason for the small-scale use of lasers in paper material manufacturing is the shortage of published research and scientific articles. Another problem restraining the use of lasers for processing paper materials is the colouration of the material, i.e. the yellowish and/or greyish colour of the cut edge appearing during or after cutting. These are the main reasons why the topic of this thesis was chosen to be the characterization of the interaction between a laser beam and paper materials. This study was carried out in the Laboratory of Laser Processing at Lappeenranta University of Technology (Finland). The laser equipment used in this study was a TRUMPF TLF 2700 carbon dioxide laser producing a beam with a wavelength of 10.6 μm and a power range of 190-2500 W (laser power on the workpiece). The interaction between the laser beam and paper material was studied by treating dried kraft pulp (grammage 67 g/m²) with different laser power levels, focal plane position settings and interaction times. The interaction was detected with several monitoring devices, i.e.
a spectrometer, a pyrometer and an active illumination imaging system. In this way it was possible to create an input-output parameter diagram and to study the effects of the input and output parameters. When the interaction phenomena are understood, process development can be carried out and even new innovations developed. Filling this gap in the knowledge of the interaction phenomena can pave the way for wider use of laser technology in the papermaking and converting industry. It was concluded in this thesis that the interaction between a laser beam and paper material has two mechanisms, depending on the focal plane position range. In the experimental set-up used, the assumed interaction mechanism B appears in the average focal plane position range of 3.4 mm to 2.4 mm, and the assumed interaction mechanism A in the range of 0.4 mm to -0.6 mm. A focal plane position of 1.4 mm represents the midzone between these two mechanisms. Holes form gradually during the interaction: first a small hole forms in the interaction area at the centre of the laser beam cross-section, and after that, as a function of interaction time, the hole expands until the interaction between the laser beam and the dried kraft pulp ends. Image analysis shows that at the beginning of the interaction, small holes of very good quality are formed. Black colouration and a heat-affected zone appear as a function of interaction time. This reveals that there are further interaction phases within interaction mechanisms A and B. These phases appear as a function of time and also as a function of the peak intensity of the laser beam. The limit peak intensity is the value that divides interaction mechanisms A and B from one-phase interaction into dual-phase interaction.
Thus, all peak intensity values below the limit peak intensity belong to MAOM (interaction mechanism A, one-phase mode) or MBOM (interaction mechanism B, one-phase mode), and values above it belong to MADM (interaction mechanism A, dual-phase mode) or MBDM (interaction mechanism B, dual-phase mode). The decomposition of cellulose between 380 and 500 °C proceeds by evolution of hydrocarbons: the long cellulose molecule is split into smaller volatile hydrocarbons in this temperature range. As the temperature increases, the decomposition process of the cellulose molecule changes. In the range of 700-900 °C, cellulose decomposes mainly into H2 gas, which is why this range is called evolution of hydrogen. Interaction in this range starts (as in the MAOM and MBOM ranges) when a small good-quality hole is formed. This is due to "direct evaporation" of the pulp via the evolution-of-hydrogen decomposition process, and it can be seen in the spectrometer as a high-intensity peak of yellow light (in the range of 588-589 nm), which corresponds to a temperature of ~1750 °C. The pyrometer does not detect this high-intensity peak, since it cannot detect the physical phase change from solid kraft pulp to gaseous compounds. As the interaction between the laser beam and the dried kraft pulp continues, the hypothesis is that three auto-ignition processes occur. The auto-ignition temperature of a substance is the lowest temperature at which it spontaneously ignites in a normal atmosphere without an external source of ignition, such as a flame or spark. Three auto-ignition processes appear in the MADM and MBDM ranges, namely: 1. the auto-ignition temperature of hydrogen (H2) is 500 °C, 2. the auto-ignition temperature of carbon monoxide (CO) is 609 °C and 3. the auto-ignition temperature of carbon (C) is 700 °C. These three auto-ignition processes lead to the formation of a plasma plume with strong emission of radiation in the visible-light range.
The formation of this plasma plume can be seen as an increase of intensity in the wavelength range of ~475-652 nm. The pyrometer shows the maximum temperature just after this ignition. The plasma plume is assumed to scatter the laser beam so that it interacts with a larger area of the dried kraft pulp than the actual area of the beam cross-section. This assumed scattering also reduces the peak intensity. The results thus indicate that the presumably scattered light, with its low peak intensity, interacts with a large area of the hole edges, and because of the low peak intensity this interaction occurs at a low temperature. The interaction between the laser beam and the dried kraft pulp therefore turns from evolution of hydrogen to evolution of hydrocarbons, which leads to the black colour of the hole edges.
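The mode taxonomy above (mechanisms A and B, each in a one-phase or dual-phase mode split by the limit peak intensity) can be sketched as a small classifier. The focal plane ranges follow the abstract, but the numeric threshold `LIMIT_PEAK_INTENSITY` is a hypothetical placeholder, not a value reported in the thesis:

```python
# Hypothetical sketch of the interaction-mode taxonomy described above.
# The focal plane ranges follow the abstract; LIMIT_PEAK_INTENSITY is an
# assumed placeholder value, not a measured threshold.

LIMIT_PEAK_INTENSITY = 1.0e6  # W/cm^2, assumed for illustration only

def interaction_mode(focal_plane_mm, peak_intensity):
    """Classify into MAOM/MADM/MBOM/MBDM or the midzone."""
    if 2.4 <= focal_plane_mm <= 3.4:
        mechanism = "MB"                 # interaction mechanism B range
    elif -0.6 <= focal_plane_mm <= 0.4:
        mechanism = "MA"                 # interaction mechanism A range
    else:
        return "midzone"                 # e.g. 1.4 mm between the mechanisms
    phase = "OM" if peak_intensity < LIMIT_PEAK_INTENSITY else "DM"
    return mechanism + phase

print(interaction_mode(3.0, 5.0e5))      # MBOM: one-phase mode
print(interaction_mode(0.0, 2.0e6))      # MADM: dual-phase mode
```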

Relevance:

30.00%

Publisher:

Abstract:

This work describes the methodology, basic procedures and instrumentation employed by the Solar Energy Laboratory at the Universidade Federal do Rio Grande do Sul for the determination of the current-voltage characteristic curves of photovoltaic modules. Following this methodology, I-V characteristic curves were acquired for several modules under diverse conditions. The main electrical parameters were determined, and the influence of temperature and irradiance on photovoltaic module performance was quantified. Most of the tested modules presented output power values considerably lower than those specified by the manufacturers. The described hardware allows the testing of modules with an open-circuit voltage of up to 50 V and a short-circuit current of up to 8 A.
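The main electrical parameters mentioned above can be read off a sampled I-V curve directly. A minimal sketch (not the laboratory's software; the toy curve is invented for illustration) extracting short-circuit current, open-circuit voltage, maximum power and fill factor:

```python
# Minimal sketch of extracting I-V curve parameters: short-circuit current
# Isc, open-circuit voltage Voc, maximum power Pmax and fill factor FF.
# The sample curve below is illustrative, not measured data.

def iv_parameters(voltages, currents):
    """voltages [V] ascending; currents [A] at the same sample points."""
    isc = currents[0]                      # current at V = 0
    voc = voltages[-1]                     # voltage where I reaches 0
    powers = [v * i for v, i in zip(voltages, currents)]
    pmax = max(powers)                     # maximum power point
    ff = pmax / (isc * voc)                # fill factor
    return isc, voc, pmax, ff

v = [0.0, 5.0, 10.0, 15.0, 18.0, 20.0]
i = [8.0, 7.9, 7.7, 7.0, 4.0, 0.0]
isc, voc, pmax, ff = iv_parameters(v, i)
print(isc, voc, pmax, round(ff, 3))   # 8.0 20.0 105.0 0.656
```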

Relevance:

30.00%

Publisher:

Abstract:

Wind power is a low-carbon form of energy production that reduces society's dependence on fossil fuels. Finland has adopted wind energy production into its climate change mitigation policy, which has led to changes in legislation and guidelines, the allocation of regional wind power areas and the establishment of a feed-in tariff. Wind power production in Finland has indeed accelerated after two decades of relatively slow growth; for instance, from 2010 to 2011 wind energy production increased by 64%, but there is still a long way to the national goal of 6 TWh by 2020. This thesis introduces a GIS-based decision-support methodology for the preliminary identification of areas suitable for wind energy production, including an estimation of their level of risk. The goal of this study was to define the least risky places for wind energy development within the Kemiönsaari municipality in Southwest Finland. Spatial multicriteria decision analysis (SMCDA) has been used for searching suitable wind power areas, along with many other location-allocation problems. SMCDA scrutinizes complex, ill-structured decision problems in a GIS environment using constraints and evaluation criteria, which are aggregated by weighted linear combination (WLC). Weights for the evaluation criteria were acquired with the analytic hierarchy process (AHP) based on nine expert interviews. Subsequently, the feasible alternatives were ranked in order to provide a recommendation, and finally a sensitivity analysis was conducted to determine the robustness of the recommendation. The first aim of the study was to scrutinize the suitability and necessity of the existing data for this SMCDA study. Most of the available data sets were of sufficient resolution and quality. The necessity of the input data was evaluated qualitatively for each data set based on, e.g., constraint coverage and attribute weights.
Attribute quality was estimated mainly qualitatively through attribute comprehensiveness, operationality, measurability, completeness, decomposability, minimality and redundancy. The most significant quality issue was redundancy, as interdependencies are not tolerated by WLC, and AHP does not include measures to detect them. The third aim was to define the least risky areas for wind power development within the study area. The two highest-ranking areas were Nordanå-Lövböle and Påvalsby, followed by Helgeboda, Degerdal, Pungböle, Björkboda and Östanå-Labböle. The fourth aim was to assess the reliability of the recommendation: the two top-ranking areas proved robust, whereas the other ones were more sensitive.
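The WLC aggregation step described above reduces to a weighted sum over standardized criterion scores, with boolean constraints masking out excluded sites. A minimal sketch, where the criteria, AHP-style weights and site scores are illustrative rather than the thesis data:

```python
# Minimal sketch of weighted linear combination (WLC) in SMCDA: criterion
# scores standardized to [0, 1] are combined with AHP-derived weights, and
# boolean constraints exclude infeasible sites. All names, weights and
# scores are illustrative placeholders.

weights = [0.5, 0.3, 0.2]  # e.g. wind resource, grid distance, land use

def suitability(site):
    scores, feasible = site
    if not feasible:           # constrained out, e.g. a protected area
        return 0.0
    return sum(w * s for w, s in zip(weights, scores))

sites = {
    "A": ([0.9, 0.6, 0.8], True),
    "B": ([0.7, 0.9, 0.9], True),
    "C": ([1.0, 1.0, 1.0], False),   # excluded by a constraint layer
}
ranking = sorted(sites, key=lambda k: suitability(sites[k]), reverse=True)
print(ranking)   # ['B', 'A', 'C']
```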

Relevance:

30.00%

Publisher:

Abstract:

This doctoral thesis introduces an improved control principle for active du/dt output filtering in variable-speed AC drives, together with performance comparisons with previous filtering methods. The effects of power semiconductor nonlinearities on the output filtering performance are investigated. These nonlinearities include the timing deviation and the voltage pulse waveform distortion in the variable-speed AC drive output bridge. Active du/dt output filtering (ADUDT) is a method to mitigate motor overvoltages in variable-speed AC drives with long motor cables, and a fairly recent addition to the available du/dt reduction methods. This thesis improves on the existing control method for the filter and concentrates on the low-voltage (below 1 kV AC) two-level voltage-source inverter implementation of the method. ADUDT uses narrow voltage pulses, with a duration on the order of a microsecond, from an IGBT (insulated-gate bipolar transistor) inverter to control the output voltage of a tuned LC filter circuit. The filter output voltage thus has increased slope transition times at the rising and falling edges, potentially with no overshoot. The effect of the longer slope transition times is a reduction in the du/dt of the voltage fed to the motor cable, and lower du/dt values reduce the overvoltage effects at the motor terminals. Compared with traditional output filtering methods for this task, active du/dt filtering allows lower inductance values and a smaller physical size of the filter itself; the filter circuit weight can also be reduced. However, the power semiconductor nonlinearities skew the filter control pulse pattern, resulting in control deviation. This deviation introduces unwanted overshoot and resonance in the filter. The control method proposed in this thesis is able to directly compensate for the dead-time-induced zero-current clamping (ZCC) effect in the pulse pattern.
It gives more flexibility to the pattern structure, which can help in the design of timing deviation compensation. Previous studies have shown that when a motor load current flows in the filter circuit and the inverter, the phase-leg blanking times distort the voltage pulse sequence fed to the filter input. These blanking times are caused by excessively large dead-time values between the IGBT control pulses. Moreover, the various switching timing distortions present in real-world electronics operating on a microsecond timescale bring additional skew to the control. Left uncompensated, this results in distortion of the filter input voltage and a filter self-induced overvoltage in the form of an overshoot. This overshoot adds to the voltage appearing at the motor terminals, increasing the transient voltage amplitude at the motor. This doctoral thesis investigates the magnitude of such timing deviation effects. If the motor load current is left uncompensated in the control, the filter output voltage can overshoot up to double the input voltage amplitude. IGBT nonlinearities were observed to cause a smaller overshoot, on the order of 30%. This thesis introduces an improved ADUDT control method that is able to compensate for the phase-leg blanking times, giving flexibility to the pulse pattern structure and the dead times. The control method is still sensitive to timing deviations, and their effect is investigated. A simple approach using a fixed delay compensation value was tried in the test set-up measurements. The ADUDT method with the new control algorithm was found to work in an actual motor drive application. Judging by the simulation results, with the delay compensation the method should ultimately enable output voltage performance and a du/dt reduction that are free from residual overshoot effects.
The proposed control algorithm is not strictly required for successful ADUDT operation: it is possible to precalculate the pulse patterns by iteration and then, for instance, store them in a look-up table in the control electronics. Rather, the newly developed control method is a mathematical tool for solving the ADUDT control pulses. It does not contain the timing deviation compensation (from the logic-level command to the phase-leg output voltage) and as such cannot remove the timing deviation effects that cause error and overshoot in the filter. When the timing deviation compensation has to be tuned into the control pattern, the precalculated iteration method could prove simpler and equally good (or even better) compared with the mathematical solution with a separate timing compensation module. One of the key findings in this thesis is the conclusion that the correctness of the pulse pattern structure, in the sense of ZCC and predicted pulse timings, cannot be separated from the timing deviations: the usefulness of a correctly calculated pattern is reduced by the voltage edge timing errors. The thesis provides an introductory background chapter on variable-speed AC drives and the problem of motor overvoltages, and takes a look at traditional solutions for overvoltage mitigation. Previous results related to active du/dt filtering are discussed. The basic operation principle and design of the filter have been studied previously, and the effect of the load current in the filter and the basic idea of compensation have been presented in the past. However, there was no direct way of including the dead time in the control (except solving the pulse pattern manually by iteration), and the magnitude of the nonlinearity effects had not been investigated. The enhanced control principle with dead-time handling capability and a case study of the test set-up timing deviations are the main contributions of this doctoral thesis.
The simulation and experimental set-up results show that the proposed control method can be used in an actual drive. Loss measurements and a comparison of active du/dt output filtering with traditional output filtering methods are also presented. Two different ADUDT filter designs are included, with ferrite-core and air-core inductors. The other filters included in the tests were a passive du/dt filter and a passive sine filter. The loss measurements incorporated a silicon carbide diode-equipped IGBT module, and the results show lower losses with these new device technologies. The new control principle was measured in a motor drive system with a 43 A load current and brought the filter output peak voltage down from 980 V (with the previous control principle) to 680 V in a variable-speed drive with a 540 V average DC link voltage. A 200 m motor cable was used, and the filter losses for the active du/dt methods were 111-126 W versus 184 W for the passive du/dt filter. In terms of inverter and filter losses, the active du/dt filtering method had a 1.82-fold increase in losses compared with an all-passive traditional du/dt output filter. The filter mass with the active du/dt method was 17% (2.4 kg, air-core inductors) of the 14 kg of the passive du/dt filter. Silicon carbide freewheeling diodes were found to reduce the inverter losses in active du/dt filtering by 18% compared with the same IGBT module with silicon diodes. For a 200 m cable length, the average peak voltage at the motor terminals was 1050 V with no filter, 960 V with the all-passive du/dt filter, and 700 V with active du/dt filtering applying the new control principle.
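The du/dt reduction itself is simple arithmetic: stretching a voltage edge lowers its slope in proportion. A quick sketch of this, where the 540 V DC link value is from the abstract but both rise times are assumed illustrative figures, not measurements from the thesis:

```python
# Arithmetic sketch of the du/dt reduction idea: lengthening the slope
# transition time of a voltage edge lowers du/dt proportionally.
# The 540 V DC link value is from the abstract; both rise times are
# assumed placeholders, not measured values.

def du_dt(voltage_step_v, rise_time_s):
    """Edge steepness in V/us for a linear voltage ramp."""
    return voltage_step_v / (rise_time_s * 1e6)

V_DC = 540.0                        # V, average DC link voltage
raw = du_dt(V_DC, 0.2e-6)           # ~0.2 us raw IGBT edge (assumed)
filtered = du_dt(V_DC, 2.0e-6)      # ~2 us filtered slope (assumed)
print(round(raw), round(filtered))  # 2700 270  (V/us)
```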

Relevance:

30.00%

Publisher:

Abstract:

In the present study, using noise-free simulated signals, we performed a comparative examination of several preprocessing techniques used to transform the cardiac event series into a regularly sampled time series appropriate for spectral analysis of heart rhythm variability (HRV). First, a group of noise-free simulated point event series, representing time series of heartbeats, was generated by an integral pulse frequency modulation model. In order to evaluate the performance of the preprocessing methods, the differences between the spectra of the preprocessed simulated signals and the true spectrum (the spectrum of the model's input modulating signals) were surveyed by visual analysis and by contrasting merit indices. The estimated spectra should match the true spectrum as closely as possible, showing a minimum of harmonic components and other artifacts. The merit indices proposed to quantify these mismatches were the leakage rate, defined as the measure of leakage components (located outside narrow windows centered at the frequencies of the model's input modulating signals) with respect to the whole of the spectral components, and the numbers of leakage components with amplitudes greater than 1%, 5% and 10% of the total spectral components. Our data, obtained from a noise-free simulation, indicate that using heart rate values instead of heart period values in the derivation of signals representative of heart rhythm results in more accurate spectra. Furthermore, our data support the efficiency of the widely used preprocessing technique based on the convolution of inverse interval function values with a rectangular window, and suggest the preprocessing technique based on cubic polynomial interpolation of inverse interval function values and subsequent spectral analysis as another efficient and fast method for the analysis of HRV signals.
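The leakage-rate merit index described above can be sketched as the share of spectral power falling outside narrow windows around the known modulating frequencies. This is an assumed reading of the index, not the authors' code, and the toy spectrum is invented:

```python
# Sketch of the leakage-rate merit index (assumed interpretation): the
# fraction of total spectral amplitude lying outside narrow windows
# centered at the known modulating frequencies. Toy data for illustration.

def leakage_rate(freqs, amplitudes, modulating_freqs, half_width):
    """Share of total spectral amplitude outside the signal windows."""
    total = sum(amplitudes)
    inside = sum(a for f, a in zip(freqs, amplitudes)
                 if any(abs(f - fm) <= half_width for fm in modulating_freqs))
    return (total - inside) / total

# Toy spectrum: true components at 0.10 and 0.25 Hz plus two leakage lines.
freqs = [0.05, 0.10, 0.20, 0.25]
amps = [1.0, 8.0, 1.0, 10.0]
rate = leakage_rate(freqs, amps, [0.10, 0.25], half_width=0.02)
print(rate)   # 0.1
```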

Relevance:

30.00%

Publisher:

Abstract:

Switching power supplies are usually implemented with control circuitry that turns the power semiconductor switches on and off at a constant clock frequency. A drawback of this customary operating principle is that the switching frequency and its harmonics are present in both the conducted and the radiated EMI spectrum of the power converter. Various variable-frequency techniques have been introduced during the last decade to overcome this EMC problem. The main objective of this study was to compare the EMI and steady-state performance of a switch-mode power supply under different spread-spectrum/variable-frequency methods. Another goal was to find suitable tools for variable-frequency EMI analysis. This thesis can be divided into three main parts. Firstly, some aspects of spectral estimation and measurement are presented. Secondly, selected spread-spectrum generation techniques are presented with simulations and background information. Finally, simulations and prototype measurements of the EMC and the steady-state performance are carried out. A combination of the autocorrelation function, the Welch spectrum estimate and the spectrogram was used as a substitute for ordinary Fourier methods in the EMC analysis. It was also shown that the switching function can be used in preliminary EMC analysis of an SMPS, and that the spectrum and autocorrelation sequence of a switching function correlate with the final EMI spectrum. This work is based on numerous simulations and measurements made with a prototype boost DC/DC converter. Four different variable-frequency modulation techniques in six different configurations were analyzed, and their EMI performance was compared with constant-frequency operation. The output voltage and input current waveforms were also analyzed in the time domain to see the effect of spread-spectrum operation on these quantities.
According to the results presented in this work, spread-spectrum modulation can be utilized in a power converter for EMI mitigation. The steady-state voltage measurements show that variable-frequency operation of the SMPS affects the voltage ripple, but the ripple measured from the prototype is still acceptable for some applications. Both current and voltage ripple can be controlled with proper main circuit and controller design.
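The core effect, and the use of the Welch estimate on a switching function, can be illustrated with synthetic waveforms (not the thesis prototype): randomly dithering the switching frequency spreads the energy of the fundamental spectral line over a band, lowering the peak EMI level. The 25 kHz nominal frequency and ±10% dither below are assumed values for the demonstration:

```python
# Illustration with synthetic waveforms (not the thesis prototype): the
# Welch spectrum of a randomly frequency-dithered switching function has a
# lower peak than that of a constant-frequency one, because the energy of
# the fundamental line is spread across a band. Frequencies are assumed.

import numpy as np
from scipy.signal import welch

fs = 1_000_000                       # 1 MHz sample rate
t = np.arange(0, 0.1, 1 / fs)
f0 = 25_000                          # nominal switching frequency (assumed)

rng = np.random.default_rng(0)
# Constant-frequency 50% duty switching function.
const_sw = np.sign(np.sin(2 * np.pi * f0 * t))
# Random carrier-frequency modulation: +/-10% dither via phase accumulation.
f_inst = f0 * (1 + 0.1 * rng.uniform(-1, 1, t.size))
spread_sw = np.sign(np.sin(2 * np.pi * np.cumsum(f_inst) / fs))

_, p_const = welch(const_sw, fs=fs, nperseg=4096)
_, p_spread = welch(spread_sw, fs=fs, nperseg=4096)
print(p_spread.max() < p_const.max())   # True: the spectral peak drops
```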

Relevance:

30.00%

Publisher:

Abstract:

The importance of appropriate supplier selection and its profound effect on the competitive advantage of companies have been widely discussed in the supply chain management (SCM) literature. With rising environmental awareness, companies and industries attach more importance to sustainable and green activities in the selection of raw material providers. The current thesis uses the data envelopment analysis (DEA) technique to evaluate the relative efficiency of suppliers in the presence of carbon dioxide (CO2) emissions for green supplier selection. We incorporate the pollution of suppliers as an undesirable output into DEA. However, two problems of conventional DEA models then arise: the lack of discrimination power among decision-making units (DMUs) and the flexibility of the input and output weights. To overcome these limitations, we use multiple criteria DEA (MCDEA) as one alternative. By applying MCDEA, the number of suppliers identified as efficient decreases, which leads to a better ranking and selection of the suppliers. Moreover, in order to compare the performance of the suppliers with an ideal supplier, a "virtual" best-practice supplier is introduced. The presence of this ideal virtual supplier further increases the discrimination power of the model for a better ranking of the suppliers. Therefore, a new MCDEA model is proposed to handle undesirable outputs and the virtual DMU simultaneously. The developed model is applied to the green supplier selection problem, and a numerical example illustrates its applicability.
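For readers unfamiliar with DEA, the baseline on which MCDEA builds can be shown with the standard input-oriented CCR envelopment model (not the thesis's MCDEA formulation): each supplier's efficiency θ is the solution of a small linear program. The three-supplier data set below is invented for illustration:

```python
# Sketch of the standard input-oriented CCR DEA model (envelopment form),
# the baseline that MCDEA extends; this is NOT the thesis's MCDEA model.
# Each DMU o solves: min theta s.t. sum_j lam_j*x_j <= theta*x_o,
# sum_j lam_j*y_j >= y_o, lam >= 0. Data below are illustrative.

import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0]])   # inputs (cost, CO2)
Y = np.array([[1.0], [1.0], [1.0]])                   # output (e.g. quality)

def ccr_efficiency(o):
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]              # minimize theta
    A_in = np.c_[-X[o], X.T]                 # lam.X - theta*x_o <= 0
    A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]  # -lam.Y <= -y_o
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return res.fun

scores = [round(ccr_efficiency(o), 3) for o in range(3)]
print(scores)   # supplier 3 is dominated by a mix of the first two
```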

Relevance:

30.00%

Publisher:

Abstract:

Almost every problem of design, planning and management in technical and organizational systems involves several conflicting goals or interests. Multicriteria decision models are nowadays a rapidly developing area of operations research. When solving practical optimization problems, it is necessary to take into account various kinds of uncertainty due to lack of data, inadequacy of mathematical models to real processes, calculation errors, etc. In practice, this uncertainty usually leads to undesirable outcomes in which the solutions are very sensitive to any changes in the input parameters; investment management is one example. Stability analysis of multicriteria discrete optimization problems investigates how the solutions found behave in response to changes in the initial data (the input parameters). This thesis is devoted to stability analysis in the problem of selecting investment project portfolios, which are optimized by considering different types of risk and the efficiency of the investment projects. The stability analysis is carried out with two approaches: qualitative and quantitative. The qualitative approach describes the behavior of solutions under small perturbations of the initial data: the stability of solutions is defined in terms of the existence of a neighborhood in the initial data space such that any perturbed problem from this neighborhood preserves the set of efficient solutions of the initial problem. The other approach studies quantitative measures such as the stability radius, which gives information about the limits of perturbations of the input parameters that do not lead to changes in the set of efficient solutions. In the present thesis several results were obtained, including attainable bounds for the stability radii of Pareto-optimal and lexicographically optimal portfolios of the investment problem with Savage's and Wald's criteria and the criterion of extreme optimism.
In addition, special classes of the problem in which the stability radii are expressed by closed formulae were indicated. The investigations were completed using different combinations of the Chebyshev, Manhattan and Hölder metrics, which allowed the perturbations of the input parameters to be monitored in different ways.
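The qualitative notion of stability above can be probed numerically: perturb every criterion value by at most ε (a Chebyshev-metric ball) and check whether the Pareto set survives. A brute-force sketch on an invented bicriteria portfolio table, not the thesis's analytical formulae:

```python
# Illustrative brute-force sketch (toy data, not the thesis's formulae):
# extract the Pareto-optimal portfolios of a small bicriteria problem, then
# test a candidate stability radius under the Chebyshev metric by perturbing
# every criterion value by at most eps and checking that the Pareto set is
# unchanged at every +/-eps corner.

import itertools

def pareto_set(values):
    """Indices of non-dominated rows (both criteria maximized)."""
    n = len(values)
    return {i for i in range(n)
            if not any(all(values[j][k] >= values[i][k] for k in range(2))
                       and values[j] != values[i] for j in range(n))}

portfolios = [(5.0, 1.0), (3.0, 3.0), (1.0, 4.0), (2.5, 2.5)]
base = pareto_set(portfolios)          # portfolio 3 is dominated

def stable_under(eps):
    for signs in itertools.product((-1, 0, 1), repeat=2 * len(portfolios)):
        pert = [(v[0] + eps * signs[2*i], v[1] + eps * signs[2*i + 1])
                for i, v in enumerate(portfolios)]
        if pareto_set(pert) != base:
            return False
    return True

print(sorted(base), stable_under(0.1), stable_under(0.5))
```

For this toy instance a small perturbation (ε = 0.1) preserves the efficient set, while ε = 0.5 lets the dominated portfolio overtake one of the efficient ones, i.e. the stability radius lies between the two.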

Relevance:

30.00%

Publisher:

Abstract:

Over time, the demand for quantitative portfolio management has increased among financial institutions, but there is still a lack of practical tools. In 2008 the EDHEC Risk and Asset Management Research Centre conducted a survey of European investment practices. It revealed that the majority of asset or fund management companies, pension funds and institutional investors do not use more sophisticated models to compensate for the flaws of Markowitz mean-variance portfolio optimization. Furthermore, tactical asset allocation managers employ a variety of methods to estimate the return and risk of assets, but they also need sophisticated portfolio management models to outperform their benchmarks. Recent developments in portfolio management suggest that new innovations are slowly gaining ground, but they still need to be studied carefully. This thesis aims to provide a practical tactical asset allocation (TAA) application of the Black-Litterman (B-L) approach and an unbiased evaluation of the qualities of B-L models. The mean-variance framework and issues related to asset allocation decisions and return forecasting are examined carefully to uncover the issues affecting active portfolio management. European fixed-income data are employed in an empirical study that investigates whether a B-L-model-based TAA portfolio is able to outperform its strategic benchmark. The tactical asset allocation utilizes a vector autoregressive (VAR) model to create return forecasts from lagged values of the asset classes as well as economic variables. The sample data (31.12.1999-31.12.2012) are divided into two parts: the in-sample data are used for calibrating a strategic portfolio, and the out-of-sample period is used for testing the tactical portfolio against the strategic benchmark. The results show that the B-L-model-based tactical asset allocation outperforms the benchmark portfolio in terms of risk-adjusted return and mean excess return.
The VAR model is able to pick up changes in investor sentiment, and the B-L model adjusts the portfolio weights in a controlled manner. The TAA portfolio shows promise especially in moderately shifting the allocation towards riskier assets when the market is turning bullish, but without overweighting investments with a high beta. Based on the findings of the thesis, the Black-Litterman model offers a good platform for active asset managers to quantify their views on investments and to implement their strategies. The B-L model shows potential and offers interesting research avenues. However, the success of tactical asset allocation is still highly dependent on the quality of the input estimates.
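The mechanism by which the B-L model "quantifies views" is its standard posterior-return formula, which blends equilibrium returns with the manager's views weighted by their uncertainty. A compact numpy sketch of that master formula (the generic textbook form, not the thesis's exact calibration; all numbers are illustrative):

```python
# Compact sketch of the standard Black-Litterman posterior expected returns
# (generic master formula, not the thesis's calibration). pi holds the
# equilibrium returns, P/q encode a relative view, Omega its uncertainty.
# All numbers are illustrative placeholders.

import numpy as np

Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])   # asset return covariance
pi = np.array([0.03, 0.05])                      # equilibrium returns
tau = 0.05                                       # prior scaling
P = np.array([[1.0, -1.0]])                      # view: asset 1 beats asset 2
q = np.array([0.01])                             # ...by 1%
Omega = np.array([[0.0025]])                     # view uncertainty

inv = np.linalg.inv
ts_inv = inv(tau * Sigma)
post_prec = ts_inv + P.T @ inv(Omega) @ P        # posterior precision
mu_post = inv(post_prec) @ (ts_inv @ pi + P.T @ inv(Omega) @ q)
print(np.round(mu_post, 4))   # pulled from the -2% equilibrium spread
                              # towards the +1% view
```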

Relevance:

30.00%

Publisher:

Abstract:

This study discusses the importance of learning through the process of exporting, and more specifically how such a process can enhance the product innovativeness of a company. The purpose of this study is to investigate the appropriate sources of learning and to suggest an interactive framework for how new knowledge from export markets can materialize into product innovation. The theoretical background of the study was constructed from the academic literature related to the concepts of learning by exporting, sources of learning in the market, and new product development. The empirical research, in the form of a qualitative case study, was based on four semi-structured interviews and secondary data from the case company's official site. The interview data were collected between March and April 2015 from case company employees who work directly in the exporting and product development departments. Thematic analysis was used to categorize and interpret the collected data. It was concluded that knowledge from an export market can be an incentive for product innovation, especially an incremental one. Foreign customers and competitors, as important sources of new knowledge, contribute to the innovation process. Foreign competitors' influence on product improvements was high only when the competitor was a market leader or held a very large market share, whereas the customers' influence was always high. Therefore, involving foreign customers in the development of a new product is vital for a company that wants to benefit from what is learned through exporting. The interactive framework, which is based on the theoretical background and the findings of the study, suggests that exporting companies can raise their product innovativeness by utilizing newly gained knowledge from export markets.
Besides the input, in the form of sources of learning, and product innovation as the output, the framework contains the process of knowledge transfer, the absorptive capacity of the firm and a new product development process. In addition, the framework and the findings enhance the understanding of the disputed relationship between exporting experience and product innovation. However, future research is needed in order to fully understand all the elements of the framework, such as the absorptive capacity of the firm, and more case companies need to be studied in order to increase the generalizability of the framework.

Relevance:

30.00%

Publisher:

Abstract:

The successful performance of a company in the market is related to the quality management of its human capital, aiming to improve the company's internal performance and the external implementation of its core business strategy. Companies with a matrix structure, focusing on the realization and development of innovations and technologies for an uncertain market, need to select their approach to the HR management system thoroughly. Human resource management has a significant impact on the organization and uses a variety of instruments, such as corporate information systems, to fulfil its functions and objectives. There are three approaches to strategic control management, depending on the degree of interference in employee decision-making, the development of skills and the employees' integration into the business strategy. The mainstream research has focused only on the framework of strategic HR planning and the general productivity of the firm, not on the features of the organizational structure and the capabilities of corporate software for human capital. This study tackles the aforementioned challenges, typical of a matrix organization, by using HR control management tools and a corporate information system. The detailed analysis in this master's thesis of an industry producing and selling electric motor and heating equipment provides an opportunity to improve the system for HR control and displays its application in ERP software. The results emphasize the sustainable role of matrix HR input control in creating independent project teams for the matrix structure, teams that are able to respond to various uncertainties of the market and use their skills to improve performance. Corporate information systems can be integrated into the input control system by means of output monitoring to regulate and evaluate the processes of the teams, using key performance indicators and reporting systems.

Relevância:

30.00%

Publicador:

Resumo:

In Canada, freedom of information must be viewed in the context of governing: how do you deal with an abundance of information while balancing a diversity of competing interests? How can you ensure people are informed enough to participate in crucial decision-making, yet willing enough to let some administrative matters be dealt with in camera, without their involvement in every detail? In an age when taxpayers' coalition groups are on the rise, and the government is encouraging the establishment of Parent Council groups for schools, the issues and challenges presented by access to information and protection of privacy legislation are real ones. The province of Ontario's decision to extend freedom of information legislation to local governments does not ensure, or equate to, full public disclosure of all facts, nor does it necessarily guarantee complete public comprehension of an issue. The mere fact that local governments, like school boards, decide to collect, assemble or record some information and not other information implies that a prior decision was made by "someone" about what was important to record or keep. That in itself means that not all the facts are going to be disclosed, regardless of the presence of legislation. The resulting lack of information can lead to public mistrust and a lack of confidence in those who govern. This is completely contrary to the spirit of the legislation, which was to provide interested members of the community with facts so that values like political accountability and trust could be ensured and meaningful criticism and input obtained on matters affecting the whole community. This thesis first reviews the historical reasons for adopting freedom of information legislation, reasons which are rooted in our parliamentary system of government.
However, the same reasoning for enacting such legislation cannot be applied carte blanche to the municipal level of government in Ontario, or more specifically to the programs, policies or operations of a school board. The purpose of this thesis is to examine whether the Municipal Freedom of Information and Protection of Privacy Act, 1989 (MFIPPA) was a necessary step to ensure greater openness from school boards. Based on a review of the Orders made by the Office of the Information and Privacy Commissioner/Ontario, it also assesses how successfully freedom of information legislation has been implemented at the municipal level of government. The Orders provide an opportunity to review what problems school boards have encountered, and what guidance the Commissioner has offered. Reference is made to a value framework as an administrative tool in critically analyzing the suitability of MFIPPA to school boards. The conclusion is drawn that MFIPPA appears to have inhibited rather than facilitated openness in local government. This may be attributed to several factors, including the general uncertainty, confusion and discretion in interpreting various provisions and exemptions in the Act. Some of the uncertainty is due to the fact that an insufficient number of school board staff are familiar with the Act. The complexity of the Act and its legalistic procedures have over-formalized the processes of exchanging information. In addition, there appears to be a concern among municipal officials that granting any access to information may violate the personal privacy rights of others. These concerns translate into indecision and extreme caution in responding to inquiries. The result is delay in responding to information requests and a lack of uniformity in the responses given. However, the mandatory review of the legislation does afford an opportunity to address some of these problems and to make this complex Act more suitable for application to school boards.
In order for the Act to function more efficiently and effectively, legislative changes must be made to MFIPPA. It is important that the recommendations for improving the Act be adopted before the government extends this legislation to any other public entities.

Relevância:

30.00%

Publicador:

Resumo:

Some Ecological Factors Affecting the Input and Population Levels of Total and Faecal Coliforms and Salmonella in Twelve Mile Creek, Lake Ontario and Sewage Waters Near St. Catharines, Ontario. Supervisor: Dr. M. Helder. The present study was undertaken to investigate the role of some ecological factors on sewage-borne bacteria in waters near St. Catharines, Ontario. Total and faecal coliform levels and the presence of Salmonella were monitored for a period of a year, along with determination of temperature, pH, dissolved oxygen, total dissolved solids, nitrate N, total phosphate P and ammonium N. Bacteriological tests for coliform analysis were done according to APHA Standard Methods by the membrane filtration technique. The grab sampling technique was employed for all sampling. Four sample sites were chosen in the Port Dalhousie beach area to determine what bacteriological or physical relationship the sites had to each other. The sample sites chosen were the sewage inflow to and the effluent from the St. Catharines (Port Dalhousie) Pollution Control Plant, Twelve Mile Creek below the sewage outfall, and Lake Ontario at the Lakeside Park beach. The sewage outfall was located in Twelve Mile Creek, approximately 80 meters from the creek junction with the beach and piers on Lake Ontario. Twelve Mile Creek normally carried a large volume of water from the Welland Canal which was diverted through the DeCew Generating Station located on the Niagara Escarpment. An additional sample site, which was thought to be free of industrial wastes, was chosen at Twenty Mile Creek, also in the Niagara Region of Ontario. There were marked variations in bacterial numbers at each site and between sites, but trends toward lower numbers were noted from the sewage inflow to Lake Ontario. Better correlations were noted between total and faecal coliform population levels and total phosphate P and ammonium N in Twenty Mile Creek.
Other correlations were observed for other sample stations; however, these results also appeared to be random in nature. Salmonella isolations occurred more frequently during the winter and spring months, when water temperatures were minimal, at all sample stations except the sewage inflow. The frequency of Salmonella isolations appeared to be related to increased levels of total and faecal coliforms in the sewage effluent. However, no clear relationships were established at the other sample stations. Due to the presence of Salmonella and high levels of total and faecal coliform indicator organisms, the sanitary quality of Lake Ontario and Twelve Mile Creek at the sample sites appeared to be impaired over the major portion of the study period.

Relevância:

30.00%

Publicador:

Resumo:

Regulatory light chain (RLC) phosphorylation in fast-twitch muscle is catalyzed by skeletal myosin light chain kinase (skMLCK), a reaction known to increase muscle force, work, and power. The purpose of this study was to explore the contribution of RLC phosphorylation to the power of mouse fast muscle during high-frequency (100 Hz) concentric contractions. To determine peak power, shortening ramps (1.05 to 0.90 Lo) were applied to wildtype (WT) and skMLCK knockout (skMLCK-/-) EDL muscles at a range of shortening velocities between 0.05 and 0.65 of maximal shortening velocity (Vmax), before and after a conditioning stimulus (CS). As a result, mean power was increased to 1.28 ± 0.05 and 1.11 ± 0.05 of pre-CS values, when collapsed across shortening velocity, in WT and skMLCK-/-, respectively (n = 10). In addition, fitting each data set to a second-order polynomial revealed that WT mice had significantly higher peak power output (27.67 ± 1.12 W/kg) than skMLCK-/- (25.97 ± 1.02 W/kg) (p < 0.05). No significant differences in optimal velocity for peak power were found between conditions and genotypes (p > 0.05). Analysis with urea-glycerol PAGE determined that RLC phosphate content had been elevated in WT muscles from 8 to 63 %, while minimal changes were observed in skMLCK-/- muscles: 3 and 8 %, respectively. Therefore, the lack of a stimulation-induced increase in RLC phosphate content resulted in a ~40 % smaller enhancement of mean power in skMLCK-/-. The increase in power output in WT mice suggests that RLC phosphorylation is a major potentiating component required for achieving peak muscle performance during brief high-frequency concentric contractions.
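The peak-power estimate described above, obtained by fitting power-velocity data to a second-order polynomial and taking its vertex, can be sketched as follows. The velocity and power values are illustrative placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical power-velocity data for one muscle: velocity as a fraction of
# Vmax, power in W/kg (illustrative only, not the study's measurements).
velocity = np.array([0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65])
power = np.array([12.0, 20.5, 25.8, 27.5, 26.9, 23.8, 18.0])

# Fit a second-order polynomial P(v) = a*v^2 + b*v + c, as in the study.
a, b, c = np.polyfit(velocity, power, 2)

# For a concave parabola (a < 0), the vertex gives the optimal velocity and
# the corresponding peak power estimate.
v_opt = -b / (2 * a)
peak_power = a * v_opt**2 + b * v_opt + c
print(f"optimal velocity: {v_opt:.2f} Vmax, peak power: {peak_power:.1f} W/kg")
```

Fitting each genotype and condition this way yields the peak power and optimal velocity values that were then compared statistically.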

Relevância:

30.00%

Publicador:

Resumo:

The important role played by the mitochondrion in the eukaryotic cell has long been recognized. However, the exact composition of mitochondria, as well as the biological processes that take place within them, remain largely unknown. Two main factors explain why the study of mitochondria progresses so slowly: the inefficiency of methods for identifying mitochondrial proteins and the imprecision of the annotation of these proteins. Consequently, we developed a new computational tool, YimLoc, which successfully predicts mitochondrial proteins from genomic sequences. This tool integrates several existing indicators, and its performance is superior to that of the indicators considered individually. We analyzed about 60 fungal genomes with YimLoc in order to resolve the controversy concerning the localization of beta-oxidation in these organisms. Contrary to what was generally assumed, our results show that most groups of Fungi possess mitochondrial beta-oxidation. This work also highlights the diversity of beta-oxidation processes in fungi, in correlation with their use of fatty acids as a source of energy and carbon. In addition, we studied the key component of the mitochondrial beta-oxidation pathway, acyl-CoA dehydrogenase (ACAD), in 250 species covering the three domains of life, combining subcellular localization prediction with subfamily classification and phylogenetic inference. Our study suggests that ACAD genes belong to an ancient family that has adopted innovative evolutionary strategies to generate a broad set of enzymes capable of using most fatty acids and amino acids.
Finally, to enable the prediction of mitochondrial proteins from data other than genomic sequences, we developed the software TESTLoc, which uses Expressed Sequence Tags (ESTs) as input. The performance of TESTLoc is significantly superior to that of any other known prediction tool. Beyond providing two new subcellular localization prediction tools that use different types of data, our work demonstrates how combining subcellular localization prediction with other in silico analysis methods improves knowledge of mitochondrial proteins. Moreover, this work proposes clear hypotheses that are easy to verify experimentally, which holds great potential for advancing our knowledge of mitochondrial metabolism.
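The integration of several existing indicators into a single localization prediction, as YimLoc does, can be sketched as a weighted combination of per-predictor scores. The predictor names, weights, scores, and threshold below are all hypothetical; the abstract does not specify YimLoc's actual combination scheme.

```python
# Minimal sketch of combining several localization predictors into one score,
# in the spirit of integrating existing indicators. All names and numbers are
# hypothetical illustrations, not YimLoc's actual method.

def combined_mito_score(scores: dict, weights: dict) -> float:
    """Weighted average of per-predictor mitochondrial scores in [0, 1]."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# Hypothetical per-protein scores from three individual indicators.
scores = {"targeting_signal": 0.80, "hydrophobicity": 0.60, "codon_usage": 0.70}
weights = {"targeting_signal": 2.0, "hydrophobicity": 1.0, "codon_usage": 1.0}

score = combined_mito_score(scores, weights)
is_mitochondrial = score >= 0.5  # hypothetical classification threshold
print(f"combined score = {score:.3f}, mitochondrial: {is_mitochondrial}")
```

The point of such an ensemble is that weak individual indicators can outperform any single one when combined, which is consistent with the abstract's claim that YimLoc beats each indicator considered individually.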