960 results for remainder of Québec


Relevance:

80.00%

Publisher:

Abstract:

Many of the applications of geometric modelling are concerned with the computation of well-defined properties of the model. The applications which have received less attention are those which address questions to which there is no unique answer. This thesis describes such an application: the automatic production of a dimensioned engineering drawing. One distinctive feature of this operation is the requirement for sophisticated decision-making algorithms at each stage in the processing of the geometric model. Hence, the thesis is focussed upon the design, development and implementation of such algorithms. Various techniques for geometric modelling are briefly examined, and details are then given of the modelling package that was developed for this project. The principles of orthographic projection and dimensioning are treated and some published work on the theory of dimensioning is examined. A new theoretical approach to dimensioning is presented and discussed. The existing body of knowledge on decision-making is sampled and the author then shows how methods which were originally developed for management decisions may be adapted to serve the purposes of this project. The remainder of the thesis is devoted to reports on the development of decision-making algorithms for orthographic view selection, sectioning and crosshatching, the preparation of orthographic views with essential hidden detail, and two approaches to the actual insertion of dimension lines and text. The thesis concludes that the theories of decision-making can be applied to work of this kind, and that it may be possible to generate computer solutions closer to the optimum than some man-made dimensioning schemes. Further work on important details is required before a commercially acceptable package could be produced.

Relevance:

80.00%

Publisher:

Abstract:

Purpose. To assess the relative clinical success of orthokeratology contact lenses (OK) and distance single-vision spectacles (SV) in children in terms of incidences of adverse events and discontinuations over a 2-year period. Methods. Sixty-one subjects 6 to 12 years of age with myopia of -0.75 to -4.00DS and astigmatism ≤1.00DC were prospectively allocated OK or SV correction. Subjects were followed at 6-month intervals and advised to report to the clinic immediately should adverse events occur. Adverse events were categorized into serious, significant, and non-significant. Discontinuation was defined as cessation of lens wear for the remainder of the study. Results. Thirty-one children were corrected with OK and 30 with SV. A higher incidence of adverse events was found with OK compared with SV (p < 0.001). Nine OK subjects experienced 16 adverse events (7 significant and 9 non-significant). No adverse events were found in the SV group. Most adverse events were found between 6 and 12 months of lens wear, with 11 solely attributable to OK wear. Significantly more discontinuations were found with SV in comparison with OK (p < 0.05). Conclusions. The relatively low incidence of adverse events and discontinuations with OK supports its use for the correction of myopia in children.

Relevance:

80.00%

Publisher:

Abstract:

Environmental perturbations during early mammalian development can affect aspects of offspring growth and cardiovascular health. We have demonstrated previously that maternal gestational dietary protein restriction in mice significantly elevated adult offspring systolic blood pressure. Therefore, the present study investigates the key mechanisms of blood pressure regulation in these mice. Following mating, female MF-1 mice were assigned to either a normal-protein diet (NPD; 18% casein) or an isocaloric low-protein diet throughout gestation (LPD; 9% casein), or fed the LPD exclusively during the pre-implantation period (3.5 d) before returning to the NPD for the remainder of gestation (Emb-LPD). All offspring received standard chow. At 22 weeks, isolated mesenteric arteries from LPD and Emb-LPD males displayed significantly attenuated vasodilatation to isoprenaline (P=0.04 and P=0.025, respectively), when compared with NPD arteries. At 28 weeks, stereological analysis of glomerular number in female left kidneys revealed no significant difference between the groups. Real-time RT-PCR analysis of type 1a angiotensin II receptor, Na+/K+-ATPase transporter subunits and glucocorticoid receptor expression in male and female left kidneys revealed no significant differences between the groups. LPD females displayed elevated serum angiotensin-converting enzyme (ACE) activity (P=0.044), whilst Emb-LPD males had elevated lung ACE activity (P=0.001), when compared with NPD offspring. These data demonstrate that elevated offspring systolic blood pressure following maternal gestational protein undernutrition is associated with impaired arterial vasodilatation in male offspring and elevated serum and lung ACE activity in female and male offspring, respectively, whereas kidney glomerular number in females and kidney gene expression in male and female offspring appear unaffected. © 2010 The Authors.

Relevance:

80.00%

Publisher:

Abstract:

Early embryonic development is known to be susceptible to maternal undernutrition, leading to a disease-related postnatal phenotype. To determine whether this sensitivity extended into oocyte development, we examined the effect of maternal normal protein diet (18% casein; NPD) or isocaloric low protein diet (9% casein; LPD) restricted to one ovulatory cycle (3.5 days) prior to natural mating in female MF-1 mice. After mating, all females received NPD for the remainder of gestation and all offspring were litter-size adjusted and fed standard chow. No difference in gestation length, litter size, sex ratio or postnatal growth was observed between treatments. Maternal LPD did, however, induce abnormal anxiety-related behaviour in open-field activities in male and female offspring (P < 0.05). Maternal LPD offspring also exhibited elevated systolic blood pressure (SBP) in males at 9 and 15 weeks and in both sexes at 21 weeks (P < 0.05). Male LPD offspring hypertension was accompanied by attenuated arterial responsiveness in vitro to the vasodilators acetylcholine and isoprenaline (P < 0.05). LPD female offspring adult kidneys were also smaller, but had increased nephron numbers (P < 0.05). Moreover, the relationship between SBP and kidney or heart size or nephron number was altered by diet treatment (P < 0.05). These data demonstrate the sensitivity of mouse maturing oocytes in vivo to maternal protein undernutrition and identify both behavioural and cardiovascular postnatal outcomes, indicative of adult disease. These outcomes probably derive from a direct effect of protein restriction, although indirect stress mechanisms may also be contributory. Similar and distinct postnatal outcomes were observed here compared with maternal LPD treatment during post-fertilization preimplantation development, which may reflect the relative contribution of the paternal genome. © Journal compilation © 2008 The Physiological Society.

Relevance:

80.00%

Publisher:

Abstract:

We present the development and simplification of label-free fiber optic biosensors based on immobilization of oligonucleotides on dual-peak long period gratings (dLPGs). This improvement is the result of a simplification of the biofunctionalization methodology. A one-step 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)-mediated reaction has been developed for the straightforward immobilization of unmodified oligonucleotides on the glass fiber surface along the grating region, leading to covalent attachment of a 5′-phosphorylated probe oligonucleotide to the amino-derivatized fiber grating surface. Immobilization is achieved via a 5′-phosphate-specific linkage, leaving the remainder of the oligonucleotide accessible for binding reactions. The dLPG has been tested in different external media to demonstrate its inherent ultrahigh sensitivity to the surrounding-medium refractive index (RI), achieving a 50-fold improvement in RI sensitivity over the previously published LPG sensor in media with RIs relevant to biological assays. After functionalization, the dLPG biosensor was used to monitor the hybridization of complementary oligonucleotides, showing a detectable oligonucleotide concentration of 4 nM. The proposed one-step EDC reaction approach can be further extended to develop fiber optic biosensors for disease analysis and medical diagnosis with the advantages of label-free, real-time, multiplexed operation and high sensitivity and specificity.

Relevance:

80.00%

Publisher:

Abstract:

The persistence of Salmonella spp. in low moisture foods is a challenge for the food industry as, despite control strategies already in place, notable outbreaks still occur. The aim of this study was to characterise isolates of Salmonella, known to be persistent in the food manufacturing environment, by comparing their microbiological characteristics with a panel of matched clinical and veterinary isolates. The gross morphology of the challenge panel was phenotypically characterised in terms of cellular size, shape and motility. In all the parameters measured, the factory isolates were indistinguishable from the human clinical and veterinary strains. Further detailed metabolic profiling was undertaken using the Biolog Microbial ID system. Multivariate analysis of the metabolic microarray revealed differences in metabolism of the factory isolate of S. Montevideo, based on its upregulated ability to utilise glucose and the sugar alcohol groups. The remainder of the serotype-matched isolates were metabolically indistinguishable. Temperature and humidity are known to influence bacterial survival, and experimental parameters were defined through environmental monitoring. The results revealed that Salmonella survival on stainless steel was affected by temperatures that may be experienced in a food processing environment: survival was higher (D25 = 35.4) at 25°C and a lower humidity of 15% RH, whereas cell counts declined rapidly (D10 = 3.4) at 10°C and a higher humidity of 70% RH. Several resident factory strains survived in higher numbers on stainless steel (D25 = 29.69) compared with serotype-matched clinical and veterinary isolates (D25 = 22.98).
Factory isolates of Salmonella did not show an enhanced growth rate in comparison to serotype-matched isolates grown in Luria broth, Nutrient broth and M9 minimal media, indicating that, as an independent factor, growth was unlikely to be a major driver of Salmonella persistence. A live/dead stain coupled with fluorescence microscopy revealed that, when no longer culturable, isolates of S. Schwarzengrund entered a viable but nonculturable state. The biofilm-forming capacity of the panel was characterised, revealing that all isolates were able to form biofilms; none of the factory isolates showed an enhanced capability to form biofilms in comparison to serotype-matched isolates. In disinfection studies, planktonic cells were more susceptible to disinfectants than cells in biofilms, and all the disinfectants tested were successful in reducing bacterial load. Contact time was one of the most important factors for reducing bacterial populations in a biofilm. The genomes of eight strains were sequenced. At the nucleotide and amino acid level the food factory isolates were similar to isolates from other environments; no major genomic rearrangements were observed, supporting the conclusions of the phenotypic and metabolic analysis. In conclusion, having investigated a variety of morphological, biochemical and genomic factors, it is unlikely that the persistence of Salmonella in the food manufacturing environment is attributable to a single phenotypic, metabolic or genomic factor. Whilst a combination of microbiological factors may be involved, it is also possible that strain persistence in the factory environment is a consequence of failure to apply established hygiene management principles.
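The D-values reported above (e.g. D25 = 35.4) are decimal reduction times: the exposure time needed for a one-log10 drop in viable count under given conditions. As a hedged sketch with invented numbers (not the study's data), a D-value can be estimated from the slope of log10 counts regressed on time:

```python
import numpy as np

def d_value(times_days, log10_counts):
    """Estimate the decimal reduction time (D-value): days of exposure for a
    1-log10 drop in viable count, from the slope of log10(count) vs. time."""
    slope, _ = np.polyfit(times_days, log10_counts, 1)
    return -1.0 / slope  # D-value is the negative reciprocal of the slope

# Hypothetical survival data on stainless steel (days, log10 CFU per coupon)
times = np.array([0, 10, 20, 30, 40])
counts = np.array([6.0, 5.7, 5.4, 5.1, 4.8])  # 0.03 log10 lost per day
print(round(d_value(times, counts), 1))  # → 33.3 (days per log10 reduction)
```

The same regression applied to plate-count data collected at 25°C or 10°C would yield D25- and D10-type values of the kind quoted in the abstract.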

Relevance:

80.00%

Publisher:

Abstract:

In the discussion - Industry Education: The Merger Continues - Rob Heiman, Assistant Professor of Hospitality Food Service Management at Kent State University, declares at the outset: “Integrating the process of an on-going catering and banquet function with that of selected behavioral academic objectives leads to an effective, practical course of instruction in catering and banquet management. Through an illustrated model, this article highlights such a merger while addressing a variety of related problems and concerns to the discipline of hospitality food service management education.” The article stresses the importance of blending the theoretical, curriculum-based learning process with a hands-on approach, in essence combining a working program with academics to develop a well-rounded hospitality student. “How many programs are enjoying the luxury of excessive demand for students from industry[?]” the author asks, highlighting the immense need for qualified personnel in the hospitality industry. As the author describes it, “An ideal education program concerns itself with the integration of theory and simulation with hands-on experience to teach the cognitive as well as the technical skills required to achieve the pre-determined hospitality education objectives.” In food service, one way to achieve this integrated learning curve is to have the students prepare foods and then consume them; Heiman suggests this will quickly illustrate to students the rights and wrongs of food preparation. Another way is to integrate the academic program with feeding the university population. The author offers further illustrations of similar principles. Heiman takes special care in characterizing the banquet and catering portions of the food service industry, and he offers empirical data to support the descriptions.
It is in these areas, banquet and catering, that Heiman says special attention is needed to produce qualified students for those fields. This is the real focus of the discussion, and the remainder of the article is devoted to it. “Based on the perception that quality education is aided by implementing project assignments through the course of study in food service education, a model description can be implemented for a course in Catering and Banquet Management and Operations. This project model first considers the prioritized objectives of education and industry and then illustrates the successful merging of resources for mutual benefits,” Heiman sketches. The model referred to is the one mentioned in the thesis statement at the beginning of the article. It is divided into six major components, which Heiman lists and details. “The model has been tested through two semesters involving 29 students,” says Heiman. “Reaction by all participants has been extremely positive. Recent graduates of this type of program have received a sound theoretical framework and demonstrated their creative interpretation of this theory in practical application,” Heiman says in summation.

Relevance:

80.00%

Publisher:

Abstract:

In his discourse - The Chef In Society: Origins And Development - Marcel R. Escoffier, Graduate Student, School of Hospitality Management at Florida International University, opens with: “The role of the modern professional chef has its origins in ancient Greece. The author traces that history and looks at the evolution of the executive chef as a manager and administrator.” “Chefs, as tradespersons, can trace their origins to ancient Greece,” the author offers with citation. “Most were slaves…” he also informs you. Even at that low estate in life, the chef was master of the slaves and servants who were at close hand in the environment in which they worked. “In Athens, a cook was the master of all the household slaves…” says Escoffier. As Athenian influence wanes and Roman civilization picks up the torch, chefs maintain and increase their status as important tradesmen in society. “Here the first professional societies of cooks were formed, almost a hierarchy,” Escoffier again cites the information. “It was in Rome that cooks established their first academy: Collegium Coquorum,” he further reports. Chefs again increase their significance during the following Italian Renaissance as the scope of their influence widens. “…it is an historical fact that the marriage of Henry IV and Catherine de Medici introduced France to the culinary wonders of the Italian Renaissance,” Escoffier enlightens you. “Certainly the professional chef in France became more sophisticated and more highly regarded by society after the introduction of the Italian cooking concepts.” The author wants you to know that by this time cookbooks were already making important inroads and contributing to the history of cooking above and beyond their obvious informational status. Outside of the apparent European influences in cooking, Escoffier also briefly mentions the development of Chinese and Indian chefs.
“It is interesting to note that the Chinese, held by at least one theory as the progenitors of most of the culinary heritage, never developed a high esteem for the position of chef,” Escoffier maintains the historical tack. “It was not until the middle 18th Century that the first professional chef went public. Until that time, only the great houses of the nobility could afford to maintain a chef,” Escoffier notes. This private-to-public transition, in conjunction with culinary writing, is a benchmark for the profession; chefs now establish authority and eminence. The remainder of the article is devoted to the development of the professional chef, especially the melding of two seminal figures in the culinary arts, César Ritz and Auguste Escoffier. The works of Frederick Taylor are also highlighted.

Relevance:

80.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as n grows dramatically. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
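The reduced-rank tensor factorization that latent structure models induce can be made concrete with a small sketch. The snippet below (dimensions and weights are invented for illustration) assembles a rank-2 PARAFAC representation of the joint pmf of three categorical variables and checks that it is a valid probability tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, levels = 3, 2, 4  # three categorical variables, two latent classes, four levels each

# Latent class weights nu_h and per-variable conditional pmfs lambda^{(j)}_{h, .}
nu = rng.dirichlet(np.ones(k))
lam = [rng.dirichlet(np.ones(levels), size=k) for _ in range(p)]  # each is k x levels

# PARAFAC form: P(y1, y2, y3) = sum_h nu_h * prod_j lam[j][h, y_j],
# a nonnegative, reduced-rank factorization of the joint pmf.
P = np.zeros((levels,) * p)
for h in range(k):
    P += nu[h] * np.einsum('a,b,c->abc', lam[0][h], lam[1][h], lam[2][h])

print(P.shape, round(float(P.sum()), 6))  # → (4, 4, 4) 1.0
```

A collapsed Tucker decomposition, as proposed in Chapter 2, would replace the single latent index h with a small core tensor linking per-variable latent classes; the construction above illustrates only the PARAFAC end of that spectrum.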

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
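To illustrate what a Gaussian approximation to a posterior looks like in the simplest possible setting, the sketch below applies a standard Laplace approximation (Gaussian centred at the posterior mode, with variance taken from the curvature of the log density) to a Beta posterior with invented counts. This is the textbook cousin of the idea, not the optimal-KL Gaussian approximation derived in Chapter 4:

```python
# Invented example: 40 successes in 100 Bernoulli trials with a Beta(1, 1)
# prior give a Beta(a, b) posterior with a = 41, b = 61.
a, b = 41.0, 61.0

# Laplace approximation: Gaussian centred at the posterior mode, with variance
# equal to the inverse negative second derivative of the log density there.
mode = (a - 1) / (a + b - 2)                        # posterior mode = 0.4
curv = (a - 1) / mode**2 + (b - 1) / (1 - mode)**2  # -(d^2/dt^2) log p(t) at the mode
var = 1.0 / curv

exact_mean = a / (a + b)
print(mode, round(var, 4), round(exact_mean, 3))  # → 0.4 0.0024 0.402
```

With these counts the Gaussian centre (0.4) already sits close to the exact posterior mean (≈0.402), and the gap shrinks as the cell counts grow, which is the regime the chapter's finite-sample KL bounds quantify.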

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
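The waiting-time idea is easy to demonstrate empirically. The sketch below (using iid Gaussian noise as a stand-in for a real process) extracts the gaps between exceedances of a high threshold; in data with extremal dependence, the same computation would instead reveal clusters of short gaps:

```python
import numpy as np

def exceedance_waiting_times(x, threshold):
    """Indices of threshold exceedances and the waiting times between them."""
    idx = np.flatnonzero(x > threshold)
    return idx, np.diff(idx)

rng = np.random.default_rng(1)
series = rng.standard_normal(10_000)   # iid stand-in for a real indexed process
u = np.quantile(series, 0.99)          # a high threshold (top 1% of values)
idx, waits = exceedance_waiting_times(series, u)

# For iid data the gaps are roughly geometric with mean near 1 / 0.01 = 100;
# tail dependence would shorten many of the gaps without changing their count.
print(len(idx), round(float(waits.mean()), 1))
```

The inferential framework described above goes further, using the full distribution of these waiting times rather than just their mean, but the raw quantities it operates on are exactly the gaps computed here.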

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
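A minimal example of an approximating kernel of the first kind (random subsets of data) is a random-walk Metropolis sampler whose acceptance ratio uses a subsampled log-likelihood. The sketch below is illustrative only, with an invented Gaussian-mean target; the subsampling noise perturbs the transition kernel and inflates the spread of the chain, which is the kind of approximation error the chapter's framework trades off against computation:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(3.0, 1.0, size=100_000)  # invented large dataset, unknown mean

def loglik(theta, batch):
    # Gaussian (unit variance) log-likelihood on a batch, rescaled to full size
    return len(data) / len(batch) * np.sum(-0.5 * (batch - theta) ** 2)

def subsample_mh(n_iter=2000, batch_size=1000, step=0.05):
    """Random-walk Metropolis with the log-likelihood estimated on a random
    subsample (with replacement) each iteration: an approximating kernel."""
    theta, chain = 0.0, []
    for _ in range(n_iter):
        batch = rng.choice(data, size=batch_size)
        prop = theta + step * rng.standard_normal()
        if np.log(rng.uniform()) < loglik(prop, batch) - loglik(theta, batch):
            theta = prop
        chain.append(theta)
    return np.array(chain)

chain = subsample_mh()
print(round(float(chain[-500:].mean()), 2))  # settles near the true mean, ~3.0
```

Using the same batch for both the proposed and current values keeps the two noisy estimates correlated, which reduces (but does not eliminate) the perturbation of the acceptance decision.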

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
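The truncated-Normal (Albert-Chib) data augmentation sampler for probit regression is easy to state, and its slow mixing under rare events is visible even in an intercept-only toy model. The sketch below uses invented data with 5 successes in 2000 trials; the very high lag-1 autocorrelation of the draws illustrates the behaviour described above:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)

# Invented rare-event data: 5 successes in n = 2000 trials, and an
# intercept-only probit model y_i ~ Bernoulli(Phi(theta)).
n, n_success = 2000, 5
y = np.zeros(n)
y[:n_success] = 1

def albert_chib(n_iter=500):
    """Albert-Chib truncated-Normal data augmentation (flat prior on theta)."""
    theta, draws = 0.0, []
    for _ in range(n_iter):
        # z_i | theta, y_i ~ N(theta, 1) truncated to (0, inf) if y_i = 1 and
        # to (-inf, 0] if y_i = 0; truncnorm bounds are in standardized units.
        lo = np.where(y == 1, -theta, -np.inf)
        hi = np.where(y == 1, np.inf, -theta)
        z = truncnorm.rvs(lo, hi, loc=theta, scale=1.0, random_state=rng)
        # theta | z ~ N(mean(z), 1/n) under the flat prior
        theta = rng.normal(z.mean(), 1.0 / np.sqrt(n))
        draws.append(theta)
    return np.array(draws)

draws = albert_chib()
ac1 = np.corrcoef(draws[:-1], draws[1:])[0, 1]  # lag-1 autocorrelation
print(round(float(draws[-1]), 2), round(float(ac1), 2))
```

With Phi(theta) near 5/2000, the posterior concentrates around theta ≈ -2.8, but the chain creeps there in small conditional-mean steps, so the effective sample size is a small fraction of the 500 nominal draws.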

Relevance:

80.00%

Publisher:

Abstract:

This research constitutes an essay in materialist and radical feminist critical theory. Its principal objective is to denounce the current structure of housing law. Working from a conceptual framework grounded in materialist and radical feminism, it seeks to bring out the standpoint of the class of women in housing. Housing law is used here in a broad sense, referring to housing as both a legal and a sociological phenomenon. Within the legal discipline, it covers all legislation currently in force in Québec concerning life in the home. Our study focuses on two modes of occupancy: the right of ownership and the rental system. The right to housing is recognized internationally in human rights instruments as the "right to adequate housing." In Canada and Québec it enjoys no explicit recognition, despite commitments made on the international stage. A statistical portrait broken down by sex shows that gaps exist between men and women in the implementation of housing law. Women have more difficulty accessing housing; they perform the majority of domestic, service and care work in the home; and they are the principal victims of violence committed there. Within the housing system, women's experience can be understood as an appropriation, at once private and collective, by the class of men, as theorized by Colette Guillaumin, centred on the sexual division of labour and sexed violence. Housing law, in its current form, rests on the appropriation of women's labour power and of their bodies.
These two criteria allow the construction of a materialist and radical feminist analytical grid for examining the structure of housing law as conceived in civil law. This feminist analysis also situates state law as a patriarchal practice, one that helps maintain the housing system, which can be likened to a hegemonic system in the sense developed by Gramsci. The study reflects on housing law in the neoliberal political climate, where neoliberalism is understood as an ideology that imposes a market rationality on all state policies. Using a method described as externally metatheoretical and radically reflexive, since it imports conceptual tools foreign to the discipline of modern law, we radically rethink the construction of civil law and of the institutions that frame housing law. Data were collected through documentary research. Four institutions of civil law are examined in detail: the subject of law; the private/public dichotomy; the mediation of housing law through immovable property, via the contractual relationship and the right of ownership; and finally notaries. The feminist analysis of the subject of law highlights a paradox. On the one hand, the presumed universality of this subject posits equality and liberty for all legal persons; yet rather than being sexually neutral, as positive law claims, we show how this subject is consistently a member of the class of men. On the other hand, we analyze how law recognizes the sex of its subjects, and above all how this sexuality is constructed on naturalist ideology. This model of the masculine subject is fundamental to the construction of housing law. The feminist study of the private/public dichotomy brings out its situated character.
Indeed, if in essence no domain or issue is inherently private or public, the process of qualification is itself an act of power. We show how civil law creates zones of private law that function as zones of non-law for women. The qualification "private" also devalues the work performed by this sex class. Housing law is nevertheless centred on the contractual relationship and the right of ownership, so it is important to examine the nature of the consent given by women as a social group in contracts of sale and of lease. These contracts do not take women's experience into account in their formation, and the categories attached to them, such as seller or tenant, represent the standpoint of the class of men. Although the popularity of co-ownership among the class of women might seem to herald change, we analyze how the dominant discourse surrounding it instrumentalizes certain feminist demands while leaving in the shadows the questions of domestic labour and sexed violence. Finally, we consider notaries, rethinking them as organic intellectuals, in Gramsci's sense, for the class of men. This intellectual function brings to light how each real-estate transaction favours the reproduction of patriarchal interests, calling into question the nature of the notariat's duties of advice and impartiality. In light of this analysis, the Civil Code of Québec is characterized, from a materialist and radical feminist perspective, as a system that institutionalizes the appropriation of women through housing law.
This research opens avenues of reflection for potential renovations of the legal practices surrounding housing law, notably notarial practice, oriented toward feminist goals of social justice.

Relevância:

80.00%

Publicador:

Resumo:

The section of CN railway between Vancouver and Kamloops runs along the base of many hazardous slopes, including the White Canyon, located just outside the town of Lytton, BC. The slope has a history of frequent rockfall activity, which presents a hazard to the railway below. Rockfall inventories can be used to understand the frequency-magnitude relationship of events on hazardous slopes; however, it can be difficult to consistently and accurately identify rockfall source zones and volumes on large slopes with frequent activity, leaving many inventories incomplete. We have studied this slope as part of the Canadian Railway Ground Hazard Research Program and, since 2012, have collected remote sensing data, including terrestrial laser scanning (TLS), photographs, and photogrammetry data, and used change detection to identify rockfalls on the slope. The objective of this thesis is to use a subset of these data to understand how rockfalls identified from TLS data can inform the frequency-magnitude relationship of rockfalls on the slope. This includes incorporating both new and existing methods into a semi-automated workflow to extract rockfall events from the TLS data. We show that these methods can identify events as small as 0.01 m3 and that the duration between scans affects the frequency-magnitude relationship of the rockfalls. We also show that by incorporating photogrammetry data into our analysis, we can create a 3D geological model of the slope and use it to classify rockfalls by lithology, further illuminating the rockfall failure patterns. When relating the rockfall activity to triggering factors, we found that the amount of precipitation occurring over the winter affects the overall rockfall frequency for the remainder of the year.
These results can provide the railways with a more complete inventory of events compared to records created through track inspection or rockfall monitoring systems installed on the slope. In addition, the database can be used to understand the spatial and temporal distribution of events, and the results can serve as an input to rockfall modelling programs.
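Frequency-magnitude relationships of the kind described above are commonly modelled as a power law, N(V >= v) = a * v^(-b), fitted to the cumulative event counts of an inventory. As a minimal, hypothetical illustration (not the thesis's actual workflow; the volumes below are synthetic), such a fit can be sketched in log-log space:

```python
import math
import random

def freq_magnitude_fit(volumes_m3):
    """Least-squares fit of a power law N(V >= v) = a * v**(-b)
    to a list of rockfall volumes, done in log-log space."""
    v = sorted(volumes_m3)
    xs = [math.log10(x) for x in v]
    # Cumulative count of events with volume >= v (descending rank).
    ys = [math.log10(len(v) - i) for i in range(len(v))]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return 10 ** intercept, -slope  # (a, b)

# Hypothetical inventory: Pareto-distributed volumes with
# minimum detectable size 0.01 m3 and exponent b = 0.7.
random.seed(1)
vols = [0.01 * random.random() ** (-1 / 0.7) for _ in range(5000)]
a, b = freq_magnitude_fit(vols)
```

The recovered exponent b should be close to the 0.7 used to generate the synthetic inventory; on real TLS-derived inventories, the scan interval and minimum detectable volume would both shift the fitted parameters.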

Relevância:

80.00%

Publicador:

Resumo:

In January 2014, the Northern Ireland Policing Board (NIPB) commissioned the University of Ulster to conduct research into public confidence in policing to help inform the work of the Board and its oversight of police service delivery. More specifically, the research team were tasked with exploring 'the influence that politicians, community leaders and the media have on public confidence in policing in Northern Ireland'. To date, the subject of 'confidence in policing' within a Northern Ireland context has been relatively under-researched, in both academic and policy terms. The present research is thus the first empirical work produced in Northern Ireland to consider confidence in policing from the perspective of community leaders, politicians and the media, including the key influences and dynamics that underpin confidence in the police at a community level.

The report begins with a comprehensive review of academic literature, policy documents and contemporary events related to confidence in policing. It then provides an overview of the methodology, with the remainder of the report comprising the findings from discussions with the participating representatives of the media, political parties and the community and voluntary sector. The report concludes with an overview of the central findings along with a series of recommendations.

Relevância:

80.00%

Publicador:

Resumo:

We show that the theory of involutive bases can be combined with discrete algebraic Morse theory. For a graded k[x0, ..., xn]-module M, this yields a free resolution G, which in general is not minimal. We see that G is isomorphic to the resolution induced by an involutive basis, and it is possible to identify involutive bases inside the resolution G. The shape of G is given by a concrete description. For the differential dG, several computation rules are established, based on the fact that certain patterns appear at several positions in the computation of dG. In particular, the constants can be computed independently of the remainder of the differential. This allows us, starting from G, to determine the Betti numbers of M without computing a minimal free resolution, and thus we obtain a new algorithm to compute Betti numbers. This algorithm has been implemented in CoCoALib by Mario Albert; compared with some other computer algebra systems, it computes Betti numbers faster in most of the examples we have considered. For Veronese subrings S(d), we have found a Pommaret basis, which yields new proofs of some known properties of these rings. Via the theoretical statements established for G, we can identify some generators of modules in G where no constants appear; as a direct consequence, some non-vanishing Betti numbers of S(d) can be given. Finally, we give a proof of the Hyperplane Restriction Theorem with the help of Pommaret bases. This part is largely independent of the rest of this work.
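For context, the Betti numbers referred to above are the standard graded Betti numbers: writing R = k[x_0, ..., x_n], they are the ranks of the free modules in a minimal graded free resolution of M (this is the textbook definition, not notation taken from the abstract itself):

```latex
\[
  0 \longrightarrow \bigoplus_{j} R(-j)^{\beta_{p,j}}
    \longrightarrow \cdots
    \longrightarrow \bigoplus_{j} R(-j)^{\beta_{1,j}}
    \longrightarrow \bigoplus_{j} R(-j)^{\beta_{0,j}}
    \longrightarrow M \longrightarrow 0,
  \qquad
  \beta_{i,j}(M) = \dim_k \operatorname{Tor}_i^R(M, k)_j .
\]
```

The point of the algorithm described in the abstract is that these β_{i,j} can be read off from the generally non-minimal resolution G, without ever constructing the minimal resolution itself.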

Relevância:

80.00%

Publicador:

Resumo:

Firms that use information and communication technologies (ICT) to innovate report very uneven impacts on their performance. This depends mainly on the intensity, scope, and coherence of the systemic innovation they undertake. In the current state of knowledge, the literature identifies the need to better understand this notion of systemic alignment. This thesis proposes to address the question through the notion of business model innovation. A systematic literature review was carried out. Conceptually, it helps to better define the concept of a business model and yields a typology of the various business modelling frameworks. The resulting conceptual framework addresses business model innovation at three levels: (1) strategic, through a proposed positioning matrix for business model innovation and the identification of innovation trajectories; (2) business model configuration, using Caisse and Montreuil's tetrahedral modelling framework; and (3) operational-tactical, through the analysis of key success factors and structuring chains. Given the emerging state of the literature on the subject, the chosen methodology is a comparative case study. Three case studies of Québec firms were carried out; they operate in varied sectors and have each significantly innovated their business model by relying on ICT. The research concludes that the business model concept is relevant for analyzing systemic alignment in a context of ICT-enabled innovation. Nine research propositions are stated and open the way to future research.

Relevância:

80.00%

Publicador:

Resumo:

The gender of lexical borrowings referring to non-sexed entities in French is sometimes considered arbitrary, while at other times it is seen as motivated by the word's physical form and/or its meanings. Since opinions differ on this subject, we set out to analyze a range of criteria that may contribute to the gender assignment of a borrowing. We built four corpora, each composed of texts from one borrowing linguistic community (Québec or Europe) and one level of formality (formal or informal). We observed that the gender of borrowings varies considerably in many cases. We find that borrowings from gendered languages (Italian, Arabic) generally keep their original gender. Semantic criteria and physical-form criteria can each equally well justify the gender of a borrowing. The most operative semantic criterion integrates each borrowing into a conceptual paradigm grouping several lexical units under a single conceptualization and, generally, a gender common to the paradigm.