950 results for generalized assignment
Abstract:
Since multimedia data, such as images and videos, are far more expressive and informative than ordinary text-based data, people find them more attractive media for communication and expression. Additionally, with the rising popularity of social networking tools such as Facebook and Twitter, multimedia information retrieval can no longer be considered a solitary task; rather, people constantly collaborate with one another while searching for and retrieving information. But the very cause of the popularity of multimedia data, the large and varied amount of information a single data object can carry, makes their management a challenging task. Multimedia data are commonly represented as multidimensional feature vectors and carry high-level semantic information. These two characteristics make them very different from traditional alphanumeric data, so attempting to manage them with frameworks and rationales designed for primitive alphanumeric data is inefficient. An index structure is the backbone of any database management system, and the index structures present in existing relational database management frameworks cannot handle multimedia data effectively. Thus, in this dissertation, a generalized multidimensional index structure is proposed that seamlessly accommodates, within a single framework, both the atypical multidimensional representation and the semantic information carried by different multimedia data. Additionally, the dissertation investigates the evolving relationships among multimedia data in a collaborative environment and how such information can help customize the design of the proposed index structure when it is used to manage multimedia data in a shared environment. Extensive experiments were conducted to demonstrate the usability and superior performance of the proposed framework over current state-of-the-art approaches.
Abstract:
This study examined the assignment of withdrawal codes by school administrators in two disciplinary alternative schools. Findings revealed that (a) codes were intentionally misassigned to keep students from returning to a regular school without notification, and (b) administrators improperly tracked students and failed to ascertain students' reasons for dropping out.
Abstract:
The standard highway assignment model in the Florida Standard Urban Transportation Modeling Structure (FSUTMS) is based on the equilibrium traffic assignment method. This method involves running several iterations of all-or-nothing capacity-restraint assignment with an adjustment of travel time to reflect the delays encountered in each iteration. The iterative link travel time adjustment is accomplished through the Bureau of Public Roads (BPR) volume-delay equation. Since FSUTMS' traffic assignment procedure outputs daily volumes while the input capacities are given in hourly volumes, the hourly capacities must be converted to their daily equivalents when computing the volume-to-capacity ratios used in the BPR function. The conversion is accomplished by dividing the hourly capacity by a factor called the peak-to-daily ratio, referred to as CONFAC in FSUTMS. The ratio is computed as the highest hourly volume of a day divided by the corresponding total daily volume.

While several studies have indicated that CONFAC is a decreasing function of the level of congestion, a constant value is used for each facility type in the current version of FSUTMS. This ignores the different congestion levels associated with individual roadways and is believed to be one of the culprits behind traffic assignment errors. Traffic count data from across the state of Florida were used to calibrate CONFACs as a function of a congestion measure using the weighted least squares method. The calibrated functions were then implemented in FSUTMS through a procedure that takes advantage of the iterative nature of FSUTMS' equilibrium assignment method.

The assignment results based on constant and variable CONFACs were then compared against the ground counts for three selected networks. It was found that the accuracy of the two assignments was not significantly different and that the hypothesized improvement in assignment results from the variable CONFAC model was not empirically evident. It was recognized that many other factors beyond the scope and control of this study could contribute to this finding. It was recommended that further studies focus on the use of the variable CONFAC model with recalibrated parameters for the BPR function and/or with other forms of volume-delay functions.
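For illustration, the BPR volume-delay relation and the CONFAC conversion described above can be sketched as follows. This is a minimal sketch, not FSUTMS code: the coefficient values 0.15 and 4 are the common BPR defaults, assumed here since the abstract does not state the values FSUTMS uses, and the example CONFAC is illustrative.

    def bpr_travel_time(free_flow_time, daily_volume, hourly_capacity, confac,
                        alpha=0.15, beta=4.0):
        """BPR volume-delay sketch with CONFAC capacity conversion.

        Daily capacity is obtained by dividing the hourly capacity by CONFAC
        (the peak-hour-to-daily volume ratio), so the volume-to-capacity ratio
        compares like units. alpha and beta are assumed BPR defaults.
        """
        daily_capacity = hourly_capacity / confac
        vc_ratio = daily_volume / daily_capacity
        return free_flow_time * (1.0 + alpha * vc_ratio ** beta)

    # Example: a link with a 10-minute free-flow time, 40,000 vehicles/day,
    # 2,000 vehicles/hour capacity, and an illustrative CONFAC of 0.09.
    print(bpr_travel_time(10.0, 40_000, 2_000, 0.09))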
Abstract:
Airborne Light Detection and Ranging (LIDAR) technology has become the primary method for deriving high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes such as flooding and landslides. The critical step in generating a DTM is separating ground from non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of point-level classification of LIDAR data, linear computational complexity, and preservation of the geometric shapes of terrain features. The filter works well in urban settings with gentle slopes and a mixture of vegetation and buildings. However, because it uses a constant threshold slope, the PM filter often incorrectly removes ground measurements in topographically high areas along with large non-ground objects, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for areas with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes in topographic slope and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points incorrectly removed by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that it reduces the filtering error by 20% on average compared with the method used by the popular commercial software TerraScan. The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for complex terrains in large LIDAR datasets. The GAPM filter is highly automatic and requires little human input; it can therefore significantly reduce the effort of manually processing voluminous LIDAR measurements.
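For orientation, below is a minimal raster-level sketch of the baseline PM filter (the classic progressive morphological filter, not the GAPM extension developed in this dissertation). It flags cells as non-ground when they rise above a morphologically opened surface by more than a window-dependent threshold; the window-growth rule and all parameter values are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import grey_opening

    def pm_filter(surface, cell_size=1.0, max_window=16,
                  slope=0.3, dh0=0.3, dh_max=2.5):
        # surface: 2-D grid of minimum elevations per cell (from the LIDAR points).
        # Cells whose elevation exceeds the opened surface by more than the
        # threshold dh are flagged as non-ground.
        ground = np.ones(surface.shape, dtype=bool)
        current = surface.astype(float).copy()
        window = 3
        while window <= max_window:
            opened = grey_opening(current, size=(window, window))
            # The elevation-difference threshold grows with window size
            # through an assumed constant terrain slope; this constant slope
            # is exactly what causes the "cut-off" errors the GAPM fixes.
            dh = min(slope * (window - 1) * cell_size + dh0, dh_max)
            ground &= (current - opened) <= dh
            current = opened
            window = 2 * window - 1  # progressively enlarge the window
        return ground  # True where the cell is kept as ground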
Abstract:
Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate a lane (or lanes) adjacent to a freeway that provides congestion-free trips to eligible users, such as transit vehicles or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among the different approaches for predicting this demand, the four-step demand forecasting process is the most common, with managed lane demand usually estimated at the assignment step. The key to reliably estimating the demand is therefore the use of effective assignment modeling processes.

Managed lanes are particularly effective when the road is functioning at near-capacity, so capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring, and operation. As a result, traditional modeling approaches, such as the static traffic assignment used in demand forecasting models, fail to correctly predict managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support the effective use of DTA to model managed lane operations.

Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated. These components interact with each other, so an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions.

With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in different stages of the modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, together with proper definition of performance measures, results in a calibrated and stable model that closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
Abstract:
As traffic congestion continues to worsen in large urban areas, solutions are urgently sought. However, transportation planning models, which estimate traffic volumes on transportation network links, are often unable to realistically consider travel time delays at intersections. Introducing signal controls into these models often results in significant and unstable changes in network attributes, which in turn destabilize the models, while ignoring the effect of intersection delays makes the model output inaccurate and unable to predict travel time. To represent traffic conditions in a network more accurately, planning models should be capable of arriving at a network solution based on travel costs that are consistent with the intersection delays due to signal controls. This research attempts to achieve this goal by optimizing signal controls and estimating intersection delays accordingly, which are then used in traffic assignment; simultaneous optimization of traffic routing and signal controls has not previously been accomplished in real-world applications of traffic assignment. To this end, a delay model dealing with five major types of intersections has been developed using artificial neural networks (ANNs). An ANN architecture consists of interconnected artificial neurons and may be used either to gain an understanding of biological neural networks or to solve artificial intelligence problems without necessarily creating a model of a real biological system. The ANN delay model has been trained using extensive simulations based on TRANSYT-7F signal optimizations. The delay estimates produced by the ANN model have percentage root-mean-squared errors (%RMSE) of less than 25.6%, which is satisfactory for planning purposes; larger prediction errors are typically associated with severely oversaturated conditions. A combined system has also been developed that includes the ANN delay estimation model and a user-equilibrium (UE) traffic assignment model. The combined system employs the Frank-Wolfe method to achieve a convergent solution. Because the ANN delay model provides no derivatives of the delay function, a Mesh Adaptive Direct Search (MADS) method is applied to assist in and expedite the iterative process of the Frank-Wolfe method. The performance of the combined system confirms that convergence is achieved, although a global optimum may not be guaranteed.
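As a sketch of the combined system's outer loop: Frank-Wolfe alternates all-or-nothing assignments with a step-size search, and because the delay model offers no derivatives, the step size must be chosen derivative-free. The aon_assign and delay callables below are hypothetical stand-ins, a simple grid search stands in for the MADS method used in the dissertation, and total travel time serves as a surrogate objective since the ANN delay function has no closed-form integral.

    import numpy as np

    def frank_wolfe_ue(aon_assign, delay, x0, iters=50):
        # aon_assign(t): all-or-nothing link flows for link travel times t
        # delay(x): link travel times for link flows x (an ANN-style black box)
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            y = aon_assign(delay(x))                # auxiliary all-or-nothing solution
            best_lam, best_cost = 0.0, np.inf
            for lam in np.linspace(0.0, 1.0, 21):   # derivative-free step-size search
                z = x + lam * (y - x)
                cost = float(delay(z) @ z)          # total travel time as a surrogate
                if cost < best_cost:
                    best_lam, best_cost = lam, cost
            x = x + best_lam * (y - x)
        return x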
Abstract:
Since the DSM-III-R criteria for Overanxious Disorder (OAD) were subsumed under Generalized Anxiety Disorder (GAD) in DSM-IV, three studies have investigated the overlap between the diagnoses. Although two studies identified children meeting both OAD and GAD criteria (OAD/GAD group), a third study identified children who met criteria for OAD but not GAD (OAD group). Given these two groups of children, we examined whether children in the OAD group (n = 30) could be differentiated from children in the OAD/GAD group (n = 81) based on self- and parent-reported anxious symptoms and level of functional impairment. Conditional probability rates were also calculated for each of the DSM anxious symptoms to determine their overall clinical utility. Findings revealed that children in the OAD group experienced fewer anxious symptoms than children in the OAD/GAD group, though both groups showed some degree of impairment. Implications for research and practice are discussed.
Abstract:
Integer programming, simulation, and rules of thumb have been integrated to develop a simulation-based heuristic for the short-term assignment of fleet in the car rental industry. The heuristic generates a plan for car movements and a set of booking limits designed to produce high revenue over a given planning horizon. Three different scenarios were used to validate the heuristic, and in all three the heuristic's mean revenue was significantly higher than the historical revenue. The time to run the heuristic for each experiment was within the three-hour limit set for the decision-making process, even though the heuristic is not fully automated. These findings demonstrate that the heuristic provides better plans, i.e., plans that yield higher profit, for the dynamic allocation of fleet than the historical decision processes. Another contribution of this effort is the integration of integer programming and rules of thumb to search for better performance under stochastic conditions.
Abstract:
This thesis involves two parts. The first is a newly proposed theoretical approach called generalized atoms in molecules (GAIM). The second is a computational study of the deamination reaction of adenine with OH⁻/nH₂O (n = 0, 1, 2, 3) and 3H₂O.

The GAIM approach aims to solve for the energy of each atom variationally in a first step and then to build the energy of a molecule from its atoms. Thus the energy of a diatomic molecule A–B is formulated as the sum of its atomic energies, $E_A$ and $E_B$, expressed as

$E_A = H^A + V_{ee}^{AA} + \tfrac{1}{2} V_{ee}^{A \leftrightarrow B}$, $\qquad E_B = H^B + V_{ee}^{BB} + \tfrac{1}{2} V_{ee}^{A \leftrightarrow B}$,

where $H^A$ and $H^B$ are the kinetic and nuclear-attraction energies of the electrons of atoms A and B, respectively; $V_{ee}^{AA}$ and $V_{ee}^{BB}$ are the interaction energies among the electrons on atoms A and B, respectively; and $V_{ee}^{A \leftrightarrow B}$ is the interaction energy between the electrons of atom A and the electrons of atom B. The energy of the molecule is then minimized subject to the constraint

$\int \rho_A(\mathbf{r})\,d\mathbf{r} + \int \rho_B(\mathbf{r})\,d\mathbf{r} = N$,

where $\rho_A(\mathbf{r})$ and $\rho_B(\mathbf{r})$ are the electron densities of atoms A and B, respectively, and $N$ is the number of electrons. The initial testing of the performance of GAIM was done by calculating dissociation curves for H₂, LiH, Li₂, BH, HF, HCl, N₂, F₂, and Cl₂. The numerical results show that GAIM performs very well for H₂, LiH, Li₂, BH, HF, and HCl. GAIM shows convergence problems for N₂, F₂, and Cl₂ due to difficulties in reordering the degenerate atomic orbitals $p_x$, $p_y$, and $p_z$ in the N, F, and Cl atoms. Further work on the development of GAIM is required.

Deamination of adenine produces one of several forms of premutagenic lesions occurring in DNA. In this thesis, mechanisms for the deamination reaction of adenine with OH⁻/nH₂O (n = 0, 1, 2, 3) and 3H₂O were investigated. The HF/6-31G(d), B3LYP/6-31G(d), MP2/6-31G(d), and B3LYP/6-31+G(d) levels of theory were employed to optimize all geometries, and energies were calculated at the G3MP2B3 and CBS-QB3 levels of theory. The effect of solvent (water) was computed using the polarizable continuum model (PCM), and intrinsic reaction coordinate (IRC) calculations were performed for all transition states. Five pathways were investigated for the deamination reaction of adenine with OH⁻/nH₂O and 3H₂O. The first four pathways (A–D) begin with deprotonation at the amino group of adenine by OH⁻, while pathway E is initiated by tautomerization of adenine. For all pathways, the next two steps involve the formation of a tetrahedral intermediate followed by dissociation to yield products via a 1,3-hydrogen shift. Deamination with a single OH⁻ has a high activation barrier (190 kJ mol⁻¹ at the G3MP2B3 level) for the rate-determining step. Adding one water molecule reduces this barrier by 68 kJ mol⁻¹ at the G3MP2B3 level. Adding more water molecules decreases the overall activation energy of the reaction further, but the effect becomes smaller with each additional water molecule. The most plausible mechanism is pathway E, the deamination reaction of adenine with 3H₂O, which has an overall G3MP2B3 activation energy of 139 and 137 kJ mol⁻¹ in the gas phase and PCM, respectively. This barrier is lower than that for deamination with OH⁻/3H₂O by 6 and 2 kJ mol⁻¹ in the gas phase and PCM, respectively.
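Stated compactly, the variational problem above can be read as the following constrained minimization; the Lagrange-multiplier form is a standard reading assumed here for clarity, not quoted from the thesis:

\[
\min_{\rho_A,\,\rho_B} \; (E_A + E_B)
\quad \text{subject to} \quad
\int \rho_A(\mathbf{r})\,d\mathbf{r} + \int \rho_B(\mathbf{r})\,d\mathbf{r} = N,
\]
\[
\mathcal{L} = E_A + E_B - \mu \left( \int \rho_A(\mathbf{r})\,d\mathbf{r} + \int \rho_B(\mathbf{r})\,d\mathbf{r} - N \right),
\]

with the atomic densities varied until the Lagrangian $\mathcal{L}$ is stationary in each density and in the multiplier $\mu$.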
Abstract:
This work consists of two distinct parts: the main one (Chapters 1 and 2) is devoted to introducing additional structure on groups that arise naturally in the context of shape theory. The second part (Chapter 3) considers how to generalize the theory of covering spaces and, in particular, proposes a line of work related to shape theory. The starting point of this doctoral thesis is the papers [25, 26, 68, 69, 70], in which the authors introduce and use certain ultrametrics on the set of shape morphisms between two pointed topological spaces. In particular, when the domain is (S¹, 1), the construction carried out in [68] yields an explicit ultrametric on the shape group π̌₁(X, x₀) of a compact metric space X, as already observed in [69] and [80]. When the space is not compact metric, the construction leads to the concept of a generalized ultrametric in the sense of Priess-Crampe and Ribenboim [78, 79]. In [7], D. K. Biss introduced the idea of topologizing the fundamental group of a space so that the topology on π₁(X, x₀) is a group topology capable of detecting the (non-)existence of a universal cover of X. The suggested approach is to give π₁(X, x₀) the quotient topology induced by the compact-open topology on the loop space Ω(X, x₀). However, that article contains some errors; in particular, the one relevant to the present work was pointed out by P. Fabel in [33], who showed that, in general, the group operation on π₁(X, x₀) with the quotient topology is not continuous. From a similar point of view, several authors have tried to endow the fundamental group with a topology making π₁(X, x₀) a topological group with a continuous projection q : Ω(X, x₀) → π₁(X, x₀)...
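For context (a standard definition, added here rather than taken from the thesis): an ultrametric is a metric satisfying the strong triangle inequality d(x, z) ≤ max{d(x, y), d(y, z)} for all x, y, z, and a generalized ultrametric in the sense of Priess-Crampe and Ribenboim allows distances to take values in a partially ordered set rather than in the non-negative reals.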
Abstract:
This doctoral thesis was conceived with the purpose of understanding, analyzing, and above all modeling the statistical behavior of financial time series. In this respect, the models that best capture the special characteristics of these series are conditional heteroskedasticity models, in discrete time when the intervals at which the data are collected allow it, and in continuous time when we have daily or intraday data. To this end, this thesis proposes several Bayesian estimators for the parameters of discrete-time GARCH models (Bollerslev (1986)) and continuous-time COGARCH models (Klüppelberg et al. (2004)). Chapter 1 introduces the characteristics of financial series and presents the ARCH, GARCH, and COGARCH models along with their main properties. Mandelbrot (1963) noted that financial series are not stationary and that their increments show no autocorrelation, although their squares are correlated. He also pointed out that their volatility is not constant and that volatility clusters appear. He observed the lack of normality of financial series, due mainly to their leptokurtic behavior, and also highlighted the seasonal effects these series exhibit, analyzing how they are affected by the time of year or the day of the week. Later, Black (1976) completed the list of special characteristics by including the so-called leverage effects, related to how positive and negative fluctuations in asset prices affect the volatility of the series differently.
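To make the clustering behavior concrete, here is a minimal GARCH(1,1) simulation sketch; this is a textbook illustration, not the Bayesian estimators proposed in the thesis, and the parameter values are illustrative.

    import numpy as np

    def simulate_garch11(omega, alpha, beta, n, seed=0):
        # GARCH(1,1): sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1]
        # Requires alpha + beta < 1 for a finite unconditional variance.
        rng = np.random.default_rng(seed)
        eps = np.empty(n)
        sigma2 = np.empty(n)
        sigma2[0] = omega / (1.0 - alpha - beta)  # start at the unconditional variance
        eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
        for t in range(1, n):
            sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
            eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
        return eps, sigma2

    # High persistence (alpha + beta close to 1) produces the volatility
    # clustering Mandelbrot described: large shocks follow large shocks.
    returns, variances = simulate_garch11(omega=0.05, alpha=0.08, beta=0.90, n=1000)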
Abstract:
With the increasing prevalence and capabilities of autonomous systems as part of complex heterogeneous manned-unmanned environments (HMUEs), an important consideration is the impact of the introduction of automation on the optimal assignment of human personnel. The US Navy implemented optimal staffing techniques in the 1990s and 2000s with a "minimal staffing" approach. The results were poor, leading to a degradation of naval preparedness, so another approach to determining optimal staffing is clearly necessary. To this end, the goal of this research is to develop human performance models for use in determining the optimal manning of HMUEs. The human performance models are developed using an agent-based simulation of the aircraft carrier flight deck, a representative safety-critical HMUE. The Personnel Multi-Agent Safety and Control Simulation (PMASCS) simulates and analyzes the effects of introducing generalized maintenance crew skill sets and accelerated failure repair times on the overall performance and safety of the carrier flight deck. A behavioral model of five operator types (ordnance officers, chocks and chains, fueling officers, plane captains, and maintenance operators) is presented here along with an aircraft failure model. The main focus of this work is on the maintenance operators and aircraft failure modeling, since they have a direct impact on total launch time, a primary metric of carrier deck performance. With PMASCS I explore the effects of two variables on the total launch time of 22 aircraft: (1) the skill level of maintenance operators and (2) aircraft failure repair times while on the catapult (referred to as Phase 4 repair times). It is found that neither introducing a generic skill set to maintenance crews nor introducing a technology to accelerate Phase 4 aircraft repair times improves the average total launch time of 22 aircraft. An optimal manning level of 3 maintenance crews is found under all conditions, the point beyond which additional maintenance crews do not reduce the total launch time. A discussion is also included of how these results change if operations are relieved of the bottleneck of installing the holdback bar at launch time.