887 results for: problems with child neglect reporting
Abstract:
We present a new algorithm for exactly solving decision-making problems represented as influence diagrams. We do not require the usual no-forgetting and regularity assumptions, which allows us to solve problems with limited information. The algorithm, which implements a sophisticated variable elimination procedure, is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 strategies.
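As a reader aid (not the paper's algorithm), the toy sketch below shows the two elimination operations at the heart of any variable elimination scheme for influence diagrams: chance variables are summed out (expectation), decision variables are maximised out. The weather/umbrella diagram and all numbers are illustrative assumptions.

```python
# A minimal sketch of variable elimination for a one-decision influence
# diagram. The diagram, numbers and names are illustrative assumptions,
# not the paper's algorithm.

# Chance variable: Weather in {"rain", "sun"} with prior P(W)
p_weather = {"rain": 0.3, "sun": 0.7}
# Decision variable: Umbrella in {"take", "leave"}
decisions = ["take", "leave"]
# Utility U(W, D)
utility = {("rain", "take"): 70, ("rain", "leave"): 0,
           ("sun", "take"): 20, ("sun", "leave"): 100}

def solve():
    # Eliminate the chance variable by summation (expected utility per
    # decision), then eliminate the decision variable by maximisation.
    expected = {d: sum(p_weather[w] * utility[(w, d)] for w in p_weather)
                for d in decisions}          # sum-out W
    best = max(expected, key=expected.get)   # max-out D
    return best, expected[best]

print(solve())  # ('leave', 70.0) under these illustrative numbers
```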
Abstract:
This report outlines the findings from a research project examining what works well in investigative interviews (ABE interviews) with child witnesses in Northern Ireland. The project was developed in collaboration with key stakeholders and was jointly funded by the Department of Justice NI, NSPCC, SBNI and PSNI. While there is a substantial research literature examining the practice of forensic interviewing both internationally and within the UK, there has been little exploration of this issue in Northern Ireland. Equally, the existing literature has tended to adopt a ‘deficit’ approach, identifying areas of poor practice with limited recognition of the practical difficulties interview practitioners face or of what works well for them in practice. This study aimed to address these gaps by adopting an ‘appreciative inquiry’ approach to explore stakeholder perspectives on what is working well within current ABE practice and to identify what can be built on to deliver optimal practice.
Abstract:
The shifted Legendre orthogonal polynomials are used for the numerical solution of a new formulation of the multi-dimensional fractional optimal control problem (M-DFOCP) with a quadratic performance index. The fractional derivatives are described in the Caputo sense. The Lagrange multiplier method for the constrained extremum and the operational matrix of fractional integrals are used together, with the help of the properties of the shifted Legendre orthonormal polynomials. The method reduces the M-DFOCP to a simpler problem consisting of a system of algebraic equations. To confirm the efficiency and accuracy of the proposed scheme, several test problems are solved and their approximate solutions reported.
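As background (a standard definition, not drawn from this abstract), the Caputo fractional derivative of order α used in such formulations is:

```latex
% Caputo fractional derivative of order \alpha, with n-1 < \alpha < n:
{}^{C}\!D^{\alpha} f(t)
  = \frac{1}{\Gamma(n-\alpha)}
    \int_{0}^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}} \, d\tau ,
\qquad n-1 < \alpha < n, \; n \in \mathbb{N}.
```

For 0 < α < 1 this reduces to a single weighted integral of f′, the case most commonly encountered in fractional optimal control.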
Abstract:
This dissertation investigates methods for optimal task allocation in multi-robot systems (Multi-Robot Task Allocation, MRTA) for the inspection of industrial plants. MRTA comprises the allocation and scheduling of tasks for a group of robots under operational constraints, with the goal of minimising the total mission cost. Thanks to steady technical progress and falling technology costs, interest in mobile robots for industrial applications has grown strongly in recent years. Many works concentrate on mobility problems such as self-localisation and mapping, but only a few investigate optimal task allocation. Since a good task allocation enables more efficient planning (e.g. lower costs, shorter execution times), the goal of this work is to develop solution methods for the search/optimisation problem that arises from inspection missions comprising single- and two-robot tasks. A novel hybrid genetic algorithm is presented that combines a subpopulation-based genetic algorithm for global optimisation with local search heuristics; to accelerate the algorithm, local search operators are applied to the fittest individuals of each generation (a toy sketch of this hybrid loop follows the abstract). The presented algorithm does not merely allocate the tasks and fix their schedule; it also forms temporary robot coalitions for two-robot tasks, which gives rise to spatial and temporal constraints. Four alternative encoding strategies are designed for the presented algorithm:

- Subtask-based encoding: covers all possible solutions, but the search space is very large.
- Task-based encoding: two ways of assigning the two-robot tasks were implemented to increase the efficiency of the algorithm.
- Grouping-based encoding: temporal constraints for grouping tasks are introduced in order to obtain good solutions within a small number of generations; two implementation variants are presented.
- Decomposition-based encoding: three geometric decompositions were designed that exploit information about the spatial layout to solve problems whose inspection areas have rectangular geometries.

Simulation studies examine the performance of the different hybrid genetic algorithms, using the inspection of the tank farms of an oil refinery by a group of homogeneous inspection robots as the application case. The simulations show that the encoding strategies based on geometric decomposition can find a better solution within a small number of generations than the other strategies investigated. This work addresses single- and two-robot tasks, i.e. tasks that can be completed by a single mobile robot or that require the cooperation of two robots. Extending the developed algorithm to tasks that require more than two robots is possible, but would considerably increase the complexity of the optimisation problem.
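Here is the promised minimal sketch of the hybrid idea: a permutation-encoded genetic algorithm whose fittest individuals are refined by a local search operator each generation. The single-robot routing cost, the operators and all parameters are illustrative assumptions; the dissertation's algorithm additionally schedules two-robot tasks and forms robot coalitions.

```python
# Toy hybrid GA: permutation encoding + 2-opt local search on the elites.
# Everything here is an illustrative assumption, not the dissertation's code.
import random

TASKS = [(0, 0), (4, 1), (2, 5), (7, 3), (5, 6), (1, 8)]  # task positions

def cost(order):
    """Total travel distance of visiting tasks in the given order."""
    return sum(((TASKS[a][0] - TASKS[b][0]) ** 2 +
                (TASKS[a][1] - TASKS[b][1]) ** 2) ** 0.5
               for a, b in zip(order, order[1:]))

def crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest from p2."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    rest = [t for t in p2 if t not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def two_opt(order):
    """Local search: try reversing each segment, keep improvements."""
    best = order[:]
    for i in range(1, len(order) - 1):
        for j in range(i + 1, len(order)):
            cand = best[:i] + best[i:j][::-1] + best[j:]
            if cost(cand) < cost(best):
                best = cand
    return best

pop = [random.sample(range(len(TASKS)), len(TASKS)) for _ in range(30)]
for gen in range(50):
    pop.sort(key=cost)
    pop[:3] = [two_opt(ind) for ind in pop[:3]]   # hybrid step: refine elites
    elites = pop[:10]
    pop = elites + [crossover(*random.sample(elites, 2)) for _ in range(20)]
    for ind in pop[10:]:                          # swap mutation on children
        if random.random() < 0.2:
            a, b = random.sample(range(len(ind)), 2)
            ind[a], ind[b] = ind[b], ind[a]
print(min(map(cost, pop)))  # best route length found
```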
Abstract:
A list of considerations on the problems with large groups. This material was sent to Debra Morris in 2007, by David Jaques, an educationist with many years' experience of working with groups.
Abstract:
Particle size distribution (psd) is one of the most important features of the soil because it affects many of its other properties, and it determines how soil should be managed. To understand the properties of chalk soil, psd analyses should be based on the original material (including carbonates), and not just the acid-resistant fraction. Laser-based methods, rather than traditional sedimentation methods, are being used increasingly to determine particle size because they reduce the cost of analysis. We give an overview of both approaches and the problems associated with them for analyzing the psd of chalk soil. In particular, we show that it is not appropriate to use the widely adopted 8 µm boundary between the clay and silt size fractions for samples determined by laser to estimate proportions of these size fractions that are equivalent to those based on sedimentation. We present data from field and national-scale surveys of soil derived from chalk in England. Results from both types of survey showed that laser methods tend to over-estimate the clay-size fraction compared to sedimentation for the 8 µm clay/silt boundary, and we suggest reasons for this. For soil derived from chalk, either the sedimentation methods need to be modified or it would be more appropriate to use a 4 µm threshold as an interim solution for laser methods. Correlations between the proportions of sand- and clay-sized fractions, and other properties such as organic matter and volumetric water content, were the opposite of what one would expect for soil dominated by silicate minerals. For water content, this appeared to be due to the predominance of porous chalk fragments rather than quartz grains in the sand-sized fraction, and the abundance of fine (<2 µm) calcite crystals rather than phyllosilicates in the clay-sized fraction. This was confirmed by scanning electron microscope (SEM) analyses. "Of all the rocks with which I am acquainted, there is none whose formation seems to tax the ingenuity of theorists so severely, as the chalk, in whatever respect we may think fit to consider it." Thomas Allan, FRS, Edinburgh 1823, Transactions of the Royal Society of Edinburgh. (C) 2009 Natural Environment Research Council (NERC). Published by Elsevier B.V. All rights reserved.
Abstract:
The objective of this study was to determine insight in patients with Huntington's disease (HD) by contrasting patients' ability to rate their own behavior with their ability to rate a person other than themselves. HD patients and carers completed the Dysexecutive Questionnaire (DEX), rating themselves and each other at two time points. The temporal stability of these ratings was initially examined using these two time points, since there is no published test-retest reliability of the DEX with this population to date. This was followed by a comparison of patients' self-ratings and carers' independent ratings of patients by performing correlations with patients' disease variables, and an exploratory factor analysis was conducted on both sets of ratings. The DEX showed good test-retest reliability, with patients consistently and persistently underestimating the degree of their dysexecutive behavior, but not that of their carers. Patients' self-ratings and carers' ratings of patients both showed that dysexecutive behavior in HD can be fractionated into three underlying components (Cognition, Self-regulation, Insight), and the relative ranking of these factors was similar for both data sets. HD patients consistently underestimated the extent of only their own dysexecutive behaviors relative to carers' ratings, by 26%, but were similar in ascribing ranks to the components of dysexecutive behavior. (c) 2005 Movement Disorder Society.
Abstract:
This paper illustrates how nonlinear programming and simulation tools, which are available in packages such as MATLAB and SIMULINK, can easily be used to solve optimal control problems with state- and/or input-dependent inequality constraints. The method presented is illustrated with a model of a single-link manipulator, and is suitable for teaching to advanced undergraduate and Master's-level students in control engineering.
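The paper works in MATLAB and SIMULINK; as a language-neutral illustration of the same direct-transcription idea, here is a minimal sketch in Python with SciPy. The double-integrator problem, discretisation, and bounds are assumed for illustration, not taken from the paper's single-link manipulator model.

```python
# A minimal sketch: pose a constrained optimal control problem as a
# nonlinear program. Double integrator: drive x = (pos, vel) from (0, 0)
# to (1, 0) in time T, minimising control energy, with |u| <= 1.
# All problem data are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

N, T = 20, 3.0
dt = T / N

def simulate(u):
    """Forward-Euler rollout of pos' = vel, vel' = u."""
    pos, vel = 0.0, 0.0
    for uk in u:
        pos, vel = pos + dt * vel, vel + dt * uk
    return pos, vel

def objective(u):
    return dt * np.sum(u ** 2)            # control energy

def terminal_constraint(u):
    pos, vel = simulate(u)
    return [pos - 1.0, vel]               # == 0 at the target state

res = minimize(objective, np.zeros(N),
               bounds=[(-1.0, 1.0)] * N,  # input inequality constraint
               constraints={"type": "eq", "fun": terminal_constraint},
               method="SLSQP")
print(res.success, objective(res.x))
```

State-dependent inequality constraints can be added in the same way, as `"type": "ineq"` entries that evaluate the simulated trajectory.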
Abstract:
We investigate the spectrum of certain integro-differential-delay equations (IDDEs) which arise naturally within spatially distributed, nonlocal, pattern formation problems. Our approach is based on the reformulation of the relevant dispersion relations with the use of the Lambert function. As a particular application of this approach, we consider the case of the Amari delay neural field equation, which describes the local activity of a population of neurons, taking into consideration the finite propagation speed of the electric signal. We show that if the kernel appearing in this equation is symmetric around some point a = 0, or consists of a sum of such terms, then the relevant dispersion relation yields spectra with an infinite number of branches, as opposed to the finite sets of eigenvalues considered in previous works. Also, in earlier works the focus has been on the most rightward part of the spectrum and the possibility of instability-driven pattern formation. Here, we numerically survey the structure of the entire spectra and argue that a detailed knowledge of this structure is important within neurodynamical applications. Indeed, the Amari IDDE acts as a filter with the ability to recognise and respond whenever it is excited in such a way as to resonate with one of its rightward modes, thereby amplifying such inputs and damping others. Finally, we discuss how these results can be generalised to the case of systems of IDDEs.
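The role of the Lambert function can be seen on the simplest scalar delay equation u'(t) = -u(t) + k u(t - τ), used here as an illustrative stand-in for the Amari field equation: its characteristic equation λ + 1 = k e^{-λτ} is solved exactly by λ_n = -1 + W_n(kτ e^{τ})/τ, one eigenvalue per branch W_n of the Lambert function, which is where the infinitely many spectral branches come from. A short numerical check (k and τ are arbitrary choices):

```python
# Illustrative only: spectral branches of u'(t) = -u(t) + k*u(t - tau),
# a stand-in for the Amari field equation. The characteristic equation
#   lambda + 1 = k * exp(-lambda * tau)
# is solved exactly by lambda_n = -1 + W_n(k*tau*exp(tau)) / tau.
import numpy as np
from scipy.special import lambertw

k, tau = 0.5, 1.0
for n in range(-3, 4):  # a few of the infinitely many branches
    lam = complex(-1.0 + lambertw(k * tau * np.exp(tau), n) / tau)
    residual = abs(lam + 1.0 - k * np.exp(-lam * tau))
    print(f"branch {n:+d}: lambda = {lam:.4f}, |residual| = {residual:.1e}")
```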
Abstract:
In a previous paper (J. of Differential Equations, Vol. 249 (2010), 3081-3098) we examined a family of periodic Sturm-Liouville problems with boundary and interior singularities which are highly non-self-adjoint but have only real eigenvalues. We now establish Schatten class properties of the associated resolvent operator.
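For context (a standard definition, not specific to this paper): a compact operator T belongs to the Schatten class S_p when its singular values s_n(T) are p-summable,

```latex
% Schatten class S_p, defined via the singular values s_n(T):
T \in S_p \iff \|T\|_{S_p}^{p} = \sum_{n=1}^{\infty} s_n(T)^{p} < \infty ,
\qquad 0 < p < \infty .
```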
Abstract:
Several previous studies have attempted to assess the sublimation depth-scales of ice particles from clouds into clear air. Upon examining the sublimation depth-scales in the Met Office Unified Model (MetUM), it was found that the MetUM has evaporation depth-scales 2–3 times larger than radar observations. Similar results can be seen in the European Centre for Medium-Range Weather Forecasts (ECMWF), Regional Atmospheric Climate Model (RACMO) and Météo-France models. In this study, we use radar simulation (converting model variables into radar observables) and one-dimensional explicit microphysics numerical modelling to test and diagnose the cause of the deep sublimation depth-scales in the forecast model. The MetUM data and parametrization scheme are used to predict terminal velocity, which can be compared with the observed Doppler velocity. This can then be used to test hypotheses as to why the sublimation depth-scale is too large within the MetUM: turbulence could lead to dry-air entrainment and higher evaporation rates; the particle density may be wrong; the particle capacitance may be too high, leading to incorrect evaporation rates; or the humidity within the sublimating layer may be incorrectly represented. We show that the most likely cause of deep sublimation zones is an incorrect representation of model humidity in the layer. This is tested further by using a one-dimensional explicit microphysics model, which tests the sensitivity of ice sublimation to key atmospheric variables and is capable of including sonde and radar measurements to simulate real cases. Results suggest that the MetUM grid resolution at ice-cloud altitudes is not sufficient to maintain the sharp drop in humidity that is observed in the sublimation zone.
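To make the one-dimensional explicit-microphysics idea concrete, here is a toy sketch in which a single falling ice sphere sublimates in a uniformly subsaturated layer. It uses the textbook diffusional form dm/dt = 4πC(S_i − 1)/(F_k + F_d) with capacitance C equal to the radius; every constant and profile below is an illustrative assumption, not MetUM's scheme.

```python
# Toy 1-D sublimation model: one ice sphere falls through a dry layer
# and loses mass by vapour diffusion. Illustrative assumptions throughout.
import math

RV, LS, K, DV = 461.5, 2.83e6, 0.024, 2.2e-5   # SI units
T, ESI = 250.0, 76.0          # layer temperature (K), ice-sat. vapour pr. (Pa)
SI = 0.7                      # assumed ice saturation ratio in the dry layer
RHO_ICE, VT = 700.0, 1.0      # particle density (kg m^-3), fall speed (m s^-1)

fk = (LS / (RV * T) - 1.0) * LS / (K * T)      # heat-conduction term
fd = RV * T / (DV * ESI)                       # vapour-diffusion term

r, z, dt = 100e-6, 0.0, 1.0   # initial radius (m), depth fallen (m), step (s)
m = RHO_ICE * 4.0 / 3.0 * math.pi * r ** 3
while m > 0.0 and r > 1e-6:
    m += 4.0 * math.pi * r * (SI - 1.0) / (fk + fd) * dt   # sublimation
    if m <= 0.0:
        break
    r = (3.0 * m / (4.0 * math.pi * RHO_ICE)) ** (1.0 / 3.0)
    z += VT * dt              # depth increases as the particle falls
print(f"sublimation depth-scale ~ {z:.0f} m for a 100-micron particle")
```

The interesting experiment, per the abstract, is how strongly this depth-scale responds to the assumed humidity profile: sharpening or smoothing the drop in S_i across the layer changes the depth far more than plausible changes to density or capacitance.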