1000 results for Commission Error
Abstract:
This thesis revisits the first trusteeship of the City of Montreal, imposed by the provincial government from 1918 to 1921. For the occasion, the Lieutenant-Governor of Quebec appointed five administrators to manage the municipality's day-to-day affairs. Little known to historians and the public, this episode brought about profound changes in the City's political and administrative structures that have left their mark on the daily life of Montrealers today. Because they were not accountable to the population, the commissioners implemented several often unpopular reforms that restored the City's budgetary balance. Along the way, they sought to modernize a municipal administration whose workings had until then been incompatible with the realities of a growing population and an urban area enlarged by numerous annexations. Our study highlights the reforms introduced by the Commission administrative in taxation, the organization of municipal services, and urban-planning policy; these reforms were inspired by those implemented in several large North American cities. During their mandate, the new administrators sought to impose an administrative model patterned on large private enterprises and succeeded in substantially reducing the City's deficit. Finally, particular attention is paid to the end of the Commission administrative's mandate and to the administrative regime that followed it.
Abstract:
Travail créatif / Creative Work
Abstract:
Background. Several determinants influence the eating practices of young Quebecers. The foods offered in schools and public policies are among these influencing factors. In 2011, the Commission scolaire de Montréal (CSDM) adapted its healthy-eating policy (Politique pour une saine alimentation). Since its publication, however, very little data on the policy's application in CSDM schools has been collected. Objectives. To assess the satisfaction and perceptions of third-cycle elementary and secondary students with respect to CSDM food services. Method. Sixty-five elementary schools and 27 secondary schools were targeted by this survey. Data were collected over a six-week period through two separate questionnaires available on the Fluid Survey website. Student satisfaction and perceptions were assessed with questions on their use of the food service, their preferences, and their attitudes and opinions. Statistical analysis. Descriptive statistics were generated from the results, and chi-square tests were performed to assess associations between certain variables. Results. In total, 4,446 questionnaires were retained for analysis. In brief, taste and the cleanliness of the premises are two aspects to which a large number of third-cycle elementary and secondary students attach importance. However, the variety of the food offered is the aspect most appreciated by both groups. Elementary students are more likely than secondary students to value the health appeal of foods; the youngest students also more often choose their foods to control their weight.
The results also show that the majority of elementary and secondary students have attached importance to the quality of their diet for some time. Finally, the opinions of peers and family appear to have very little influence on young people's food choices, particularly among secondary students. Conclusion. Although several other determinants remain to be studied, this research project confirms the plurality of factors influencing the eating practices of children and adolescents. Beyond serving as a valuable indicator for the evaluation of the CSDM's food services, the results also suggest a multitude of avenues for action for CSDM practitioners.
Abstract:
This research arose from personal questions about singular impressions felt during certain professional interactions with colleagues from school boards differing in their language of instruction. It compares the organizational cultures of two school boards that differ in their language of instruction and of work: a francophone school board and an anglophone school board. These organizational cultures are sketched from interviews with middle managers drawn from different administrative units of each school board. This employment status was chosen because middle managers sit at the heart of the information flows between the strategic apex and the operating cores. Moreover, although they officially take part in their school board's consultative and decision-making processes, their roles have received little attention from researchers in administration. This exploratory study of two school boards uses a multi-perspective approach to illuminate the different facets an organizational culture can present. Three perspectives are considered: the integration perspective, which explores the cultural characteristics that align actors' behaviour with organizational objectives; the differentiation perspective, which seeks to discern the existence of subcultures within organizations; and the fragmentation perspective, which examines the particular meanings that certain groups of individuals may attribute to the actions and decisions of their peers. Two data-collection processes were used in this research: semi-structured interviews and documentary research. The data were analyzed using thematic analysis.
The middle managers' statements were thus transposed into a number of themes related to the research orientation. The results reveal that middle managers are reflexive actors in the appropriation, construction, and diffusion not only of the general culture of their school board but also of an identity culture specific to their administrative unit. Significant differences were also identified, among other things, in the cultural elements specific to each linguistic group. Whereas the managers of the francophone school board describe their culture as a framework structuring consultative, decision-making, and support processes, the managers of the anglophone school board mostly mention values tied to basic assumptions stemming from their linguistic affiliation.
Abstract:
Commentaire / Commentary
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This calls for static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code; such tools can help meet all of these goals and can significantly improve software quality, yet they remain a challenging field.
This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making early detection of otherwise hard-to-find software bugs more effective through static analysis of machine code. The focus of this work is to develop methods that automatically localize faults and optimize the code, thereby improving both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing computational and integrated peripheral functions. The stipulated rules are encoded as propositional logic formulae, and their compliance is tested individually along all possible execution paths of the application program. An incorrect machine-code pattern is identified using slicing techniques on the control-flow graph generated from the machine code. An algorithm is also proposed to assist the compiler in eliminating redundant bank-switching code and in deciding on an optimal allocation of data to banked memory, minimizing the number of bank-switching instructions in embedded system software.
A relation matrix and a state-transition diagram, formed for the active-memory-bank state transitions corresponding to each bank-selection instruction, are used to detect redundant code. Instances of code redundancy are identified based on the rules stipulated for the target processor. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code pattern, which drastically reduces state-space creation and contributes to improved model checking. Though the technique described is general, the implementation is architecture-oriented, and the feasibility study was therefore conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices toward the correct use of difficult microcontroller features when developing embedded systems.
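The idea of detecting redundant bank-switching instructions can be illustrated with a small sketch. This is a simplified state-tracking model along a straight-line instruction sequence, not the relation-matrix formulation or actual PIC16F87X opcodes; the instruction names and operands are made up for illustration.

```python
# Hypothetical sketch: flag bank-select instructions that do not change the
# currently active bank. Instruction mnemonics are illustrative only.

def find_redundant_bank_switches(instructions):
    """Return the indices of bank-select instructions that switch to the
    bank that is already active (i.e., redundant switches)."""
    active_bank = None          # active bank unknown at program entry
    redundant = []
    for i, (op, arg) in enumerate(instructions):
        if op == "BANKSEL":     # bank-select pseudo-instruction
            if arg == active_bank:
                redundant.append(i)   # switch to the already-active bank
            active_bank = arg
        # all other instructions leave the active bank unchanged
    return redundant

code = [
    ("BANKSEL", 1),
    ("MOVWF", "TRISB"),
    ("BANKSEL", 1),   # redundant: bank 1 is already active
    ("MOVWF", "TRISC"),
    ("BANKSEL", 0),
    ("MOVWF", "PORTB"),
]
print(find_redundant_bank_switches(code))  # -> [2]
```

A full tool would track this state across all paths of the control-flow graph, merging bank states at join points, rather than along a single straight-line sequence.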
Abstract:
The study clearly brings out the role of commission agents in the traditional marine fisheries sector and thereby settles the controversy regarding their role. The findings of the study have important implications for the formulation of policies and development strategies related to the traditional marine fisheries sector. The study points out the need for a thorough review and reformulation of these policies and strategies in order to realize the development potential of the traditional marine fisheries sector efficiently and to improve the economic conditions of the fishermen. The study is based mostly on Alappuzha District of Kerala, covering all 30 marine fishing villages spread over the three coastal taluks, namely Karthikappally, Ambalappuzha and Cherthala.
Abstract:
In recent years, reversible logic has emerged as one of the most important approaches to power optimization, with applications in low-power CMOS, quantum computing and nanotechnology. This paper proposes low-power circuits, implemented using reversible logic, that provide single-error correction and double-error detection (SEC-DED). The design uses a new 4 × 4 reversible gate called 'HCG' to implement Hamming error coding and detection circuits. A parity-preserving HCG (PPHCG), which preserves the input parity at the output bits, is used to achieve fault tolerance in the Hamming error coding and detection circuits.
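The logical behaviour that such SEC-DED circuits implement can be sketched in software as an extended Hamming(8,4) code. This is a plain bit-level model for illustration only; it says nothing about reversibility, gate counts, or the HCG construction itself.

```python
# Extended Hamming(8,4) SEC-DED, bit-level model.
# Codeword layout: [p0, p1, p2, d1, p3, d2, d3, d4], where p0 is the
# overall parity bit used to distinguish single from double errors.

def encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4               # covers codeword positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4               # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4               # covers positions 5, 6, 7
    cw = [p1, p2, d1, p3, d2, d3, d4]
    p0 = 0
    for b in cw:
        p0 ^= b                     # overall parity for double-error detection
    return [p0] + cw                # 8-bit codeword

def decode(cw):
    """Return (data, status): status is 'ok', 'corrected', or 'double-error'."""
    p0, rest = cw[0], cw[1:]
    s1 = rest[0] ^ rest[2] ^ rest[4] ^ rest[6]   # checks positions 1,3,5,7
    s2 = rest[1] ^ rest[2] ^ rest[5] ^ rest[6]   # checks positions 2,3,6,7
    s3 = rest[3] ^ rest[4] ^ rest[5] ^ rest[6]   # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3              # points at the error position
    overall = p0
    for b in rest:
        overall ^= b                             # 0 if total parity is even
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:                           # odd parity: single error
        if syndrome > 0:
            rest[syndrome - 1] ^= 1              # flip the erroneous bit
        status = "corrected"                     # (syndrome 0: error was in p0)
    else:                                        # even parity, nonzero syndrome
        return None, "double-error"
    return [rest[2], rest[4], rest[5], rest[6]], status
```

Any single bit flip in the 8-bit codeword is corrected, while any two flips are flagged as an uncorrectable double error.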
Abstract:
While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds place a large burden on the energy efficiency of high-speed links and make the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on the residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides deeper insight into joint error behaviour in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
Abstract:
Coded OFDM is a transmission technique used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition. However, some subcarriers can suffer deep fading due to multipath, and the power allocated to a faded subcarrier is likely to be wasted. In this paper, we compute FER and BER bounds of a coded OFDM system, given as convex functions of the power allocation for a given channel coder, interleaver and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimal power allocation for a given coded OFDM system and channel response, minimizing FER or BER under a constant total transmission power constraint, is obtained.
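As a toy illustration of the convexity argument (not the paper's actual FER/BER bounds), one can minimize a convex surrogate for the per-subcarrier error probability, sum_k exp(-g_k * p_k), under a total-power constraint. The KKT conditions give a water-filling-style closed form in the Lagrange multiplier, which can be found by bisection; the gains g_k are made-up stand-ins for subcarrier SNR scaling.

```python
# Minimize sum_k exp(-g_k * p_k)  s.t.  sum_k p_k = P, p_k >= 0.
# KKT: p_k = max(0, ln(g_k / lam) / g_k); bisect on lam to meet the budget.
import math

def allocate_power(gains, total_power, iters=100):
    lo, hi = 1e-12, max(gains)           # bracket for the multiplier lam
    p = []
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        p = [max(0.0, math.log(g / lam) / g) for g in gains]
        if sum(p) > total_power:
            lo = lam                     # budget exceeded: raise the multiplier
        else:
            hi = lam
    return p

gains = [2.0, 1.0, 0.2]                  # deep fade on the last subcarrier
p = allocate_power(gains, total_power=3.0)
```

Unlike capacity-style water-filling, this error-minimizing allocation pushes extra power toward the faded subcarriers to equalize their error contributions, which mirrors the motivation in the abstract.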
Abstract:
The problem of using information available from one variable X to make inferences about another variable Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(x) is the mean response at the predictor value X = x, and ɛ = Y - µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inferences about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant, but the actual amount X absorbed by the plant is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants Z can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural and medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others.
In this talk we address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is an observable d-dimensional random vector.
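A quick simulation sketch (illustrative only, with a linear µ and made-up parameter values) shows a well-known feature of the Berkson model: since the error η is independent of the observed Z, ordinary least squares of Y on Z remains consistent when µ is linear.

```python
# Berkson model: Y = mu(X) + eps, X = Z + eta, with mu(x) = a + b*x.
# Substituting gives Y = a + b*Z + (b*eta + eps), a regression on Z with
# mean-zero error independent of Z, so OLS of Y on Z recovers (a, b).
import random

random.seed(0)
a, b, n = 1.0, 2.0, 20000
Z = [random.uniform(0, 10) for _ in range(n)]
X = [z + random.gauss(0, 1) for z in Z]          # Berkson error: X = Z + eta
Y = [a + b * x + random.gauss(0, 0.5) for x in X]

# Ordinary least squares of Y on the observed Z
zbar = sum(Z) / n
ybar = sum(Y) / n
b_hat = sum((z - zbar) * (y - ybar) for z, y in zip(Z, Y)) / \
        sum((z - zbar) ** 2 for z in Z)
a_hat = ybar - b_hat * zbar
print(a_hat, b_hat)    # close to the true (a, b) = (1.0, 2.0)
```

This consistency is special to linear µ; for a nonlinear mean function, regressing Y on Z generally estimates E[µ(Z + η) | Z] rather than µ, which is what makes the parametric fitting problem above nontrivial.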
Abstract:
The aim of this paper is the investigation of the error which results from the method of approximate approximations applied to functions defined on compact intervals only. This method, which is based on an approximate partition of unity, was introduced by V. Maz'ya in 1991 and has mainly been used for functions defined on the whole space up to now. For the treatment of differential equations and boundary integral equations, however, an efficient approximation procedure on compact intervals is needed. In the present paper we apply the method of approximate approximations to functions which are defined on compact intervals. In contrast to the whole-space case, here a truncation error has to be controlled in addition. For the resulting total error, pointwise estimates and L1-estimates are given, where all the constants are determined explicitly.
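For orientation, the basic one-dimensional quasi-interpolant of approximate approximations has the following standard form (shown here with the Gaussian as generating function; this is the generic operator of the method, not necessarily the exact variant studied in this paper):

```latex
% Quasi-interpolant on a uniform grid of size h with dilation parameter D > 0:
M_h u(x) = \frac{1}{\sqrt{\pi D}} \sum_{m \in \mathbb{Z}}
           u(mh)\, e^{-(x - mh)^2 / (D h^2)}
% M_h u does not converge to u as h -> 0: a small "saturation" error
% depending on D remains, but it can be made negligible (e.g. below
% machine precision) by choosing D large enough. On a compact interval
% the sum must be truncated, adding the truncation error analyzed here.
```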
Abstract:
The aim of this paper is the numerical treatment of a boundary value problem for the system of Stokes' equations. For this we extend the method of approximate approximations to boundary value problems. This method was introduced by V. Maz'ya in 1991 and has been used until now for the approximation of smooth functions defined on the whole space and for the approximation of volume potentials. In the present paper we develop an approximation procedure for the solution of the interior Dirichlet problem for the system of Stokes' equations in two dimensions. The procedure is based on potential theoretical considerations in connection with a boundary integral equations method and consists of three approximation steps as follows. In a first step the unknown source density in the potential representation of the solution is replaced by approximate approximations. In a second step the decay behavior of the generating functions is used to gain a suitable approximation for the potential kernel, and in a third step Nyström's method leads to a linear algebraic system for the approximate source density. For every step a convergence analysis is established and corresponding error estimates are given.