988 results for integer disaggregation

Relevance: 10.00%

Abstract:
A patient classification system was developed that integrates a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and for allocation of nursing personnel to optimize the utilization of resources.

The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was expressed in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic.

The nursing distribution system was a linear programming model using a branch-and-bound technique to obtain integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning the staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective-function variables to define priorities for the allocation of staff. The demand constraints required that the total acuity points needed for each unit be met and that each unit have a minimum number of RNs. The supply constraints were (1) the total availability of each type of staff and the value of that staff member, where value was determined relative to that staff type's ability to perform the job function of an RN (e.g., value for eight hours: RN = 8 points, LVN = 6 points), and (2) the number of personnel available for floating between units.

The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in the penalty coefficients in the objective function and in the acuity totals in the demand constraints. Further investigation of the model documented correct adjustment of assignments in response to staff value changes, and cost minimization by the addition of a dollar coefficient to the objective function.
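The staffing model can be sketched on a toy instance. The unit demands and penalty weights below are invented, and exhaustive enumeration stands in for the branch-and-bound integer solver used in the study; only the staff values (eight hours: RN = 8 points, LVN = 6 points) come from the abstract.

```python
from itertools import product

# Toy version of the staffing model (demands and penalties are illustrative).
STAFF_VALUE = {"RN": 8, "LVN": 6}      # acuity points per 8-hour shift
PENALTY = {"RN": 1.0, "LVN": 0.9}      # penalty weights steer which staff are used first

def assign(units, available):
    """Minimise penalty-weighted staff count subject to each unit's acuity demand
    being met and each unit having at least one RN. Exhaustive search stands in
    for the branch-and-bound integer solver used in the study."""
    best = None
    choices = [range(available["RN"] + 1), range(available["LVN"] + 1)] * len(units)
    for combo in product(*choices):
        plan = [tuple(combo[2*i:2*i + 2]) for i in range(len(units))]  # (rn, lvn) per unit
        rn_used = sum(p[0] for p in plan)
        lvn_used = sum(p[1] for p in plan)
        if rn_used > available["RN"] or lvn_used > available["LVN"]:
            continue                                        # supply constraints
        if any(p[0] < 1 for p in plan):
            continue                                        # minimum one RN per unit
        if any(p[0] * 8 + p[1] * 6 < d for p, d in zip(plan, units)):
            continue                                        # meet acuity demand
        cost = rn_used * PENALTY["RN"] + lvn_used * PENALTY["LVN"]
        if best is None or cost < best[0]:
            best = (cost, plan)
    return best

cost, plan = assign(units=[14, 20], available={"RN": 4, "LVN": 3})
```

With these numbers, the cheapest feasible plan staffs the first unit with one RN and one LVN (14 points) and the second with one RN and two LVNs (20 points).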

Abstract:

Due to the ongoing trend towards increased product variety, fast-moving consumer goods such as food and beverages, pharmaceuticals, and chemicals are typically manufactured through so-called make-and-pack processes. These processes consist of a make stage, a pack stage, and intermediate storage facilities that decouple these two stages. In operations scheduling, complex technological constraints must be considered, e.g., non-identical parallel processing units, sequence-dependent changeovers, batch splitting, no-wait restrictions, material transfer times, minimum storage times, and finite storage capacity. The short-term scheduling problem is to compute a production schedule such that a given demand for products is fulfilled, all technological constraints are met, and the production makespan is minimised. A production schedule typically comprises 500–1500 operations. Due to the problem size and complexity of the technological constraints, the performance of known mixed-integer linear programming (MILP) formulations and heuristic approaches is often insufficient. We present a hybrid method consisting of three phases. First, the set of operations is divided into several subsets. Second, these subsets are iteratively scheduled using a generic and flexible MILP formulation. Third, a novel critical path-based improvement procedure is applied to the resulting schedule. We develop several strategies for the integration of the MILP model into this heuristic framework. Using these strategies, high-quality feasible solutions to large-scale instances can be obtained within reasonable CPU times using standard optimisation software. We have applied the proposed hybrid method to a set of industrial problem instances and found that the method outperforms state-of-the-art methods.
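Two ingredients of the problem can be illustrated on an invented miniature: sequence-dependent changeovers determine the makespan of an operation order on a single unit, and a best-swap loop stands in (very loosely) for the paper's critical-path-based improvement phase.

```python
# Toy stand-in: sequencing three jobs on one processing unit with
# sequence-dependent changeover times, improved by pairwise swaps.
# All durations are invented; the real problem has parallel units, storage,
# no-wait and transfer constraints handled by the MILP phases.
PROC = {"A": 4, "B": 3, "C": 5}
CHANGEOVER = {("A", "B"): 2, ("A", "C"): 1, ("B", "A"): 2,
              ("B", "C"): 4, ("C", "A"): 1, ("C", "B"): 3}

def makespan(seq):
    total = PROC[seq[0]]
    for prev, cur in zip(seq, seq[1:]):
        total += CHANGEOVER[(prev, cur)] + PROC[cur]
    return total

def improve(seq):
    """Apply improving swaps until none helps (a crude surrogate for the
    critical-path-based improvement procedure)."""
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq)):
            for j in range(i + 1, len(seq)):
                cand = seq[:]
                cand[i], cand[j] = cand[j], cand[i]
                if makespan(cand) < makespan(seq):
                    seq, improved = cand, True
    return seq, makespan(seq)

best_seq, best_ms = improve(["B", "C", "A"])
```

Starting from the order B, C, A (makespan 17), the swap loop reaches C, A, B with makespan 15, which is optimal for this instance.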

Abstract:

The near-real-time retrieval of low stratiform cloud (LSC) coverage is of vital interest for disciplines such as meteorology, transport safety, economy and air quality. Within this scope, a novel methodology is proposed which provides LSC occurrence probability estimates for a satellite scene. The algorithm is suited to the 1 × 1 km Advanced Very High Resolution Radiometer (AVHRR) data and was trained and validated against collocated SYNOP observations. Utilisation of these two combined data sources requires the formulation of constraints in order to discriminate cases where the LSC is overlaid by higher clouds. The LSC classification process is based on six features which are first converted to integer form by step functions and then combined by means of bitwise operations. Consequently, a set of values reflecting a unique combination of those features is derived, which is further employed to extract the LSC occurrence probability estimates from precomputed look-up vectors (LUV). Although the validation analyses confirmed good performance of the algorithm, some inevitable misclassifications involving other optically thick clouds were reported. Moreover, the comparison against the Polar Platform System (PPS) cloud-type product revealed superior classification accuracy. From the temporal perspective, the acquired results revealed the presence of diurnal and annual LSC probability cycles over Europe.
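The step-function and bitwise-combination idea can be sketched as follows. The feature values, thresholds, and look-up entries below are invented stand-ins, not the trained AVHRR/SYNOP values (and the real algorithm uses six features rather than two).

```python
# Sketch of the classification step: bin each feature with a step function,
# pack the bins into one integer key, and look the key up in a precomputed
# probability table. All numbers are hypothetical.
def step_code(value, thresholds):
    """Step function: map a continuous feature to a small integer bin."""
    code = 0
    for t in thresholds:
        if value >= t:
            code += 1
    return code

def feature_key(features, thresholds, bits=2):
    """Pack the binned features into one integer key via bitwise operations."""
    key = 0
    for value, th in zip(features, thresholds):
        key = (key << bits) | step_code(value, th)
    return key

# hypothetical look-up vector: key -> LSC occurrence probability
LUV = {0b0101: 0.82, 0b0001: 0.05}

key = feature_key([11.2, 3.4], [[10.0, 20.0], [2.0]])
prob = LUV.get(key, 0.0)
```

Here the first feature falls in bin 1 of [10.0, 20.0] and the second in bin 1 of [2.0], giving the packed key 0b0101 and the table probability 0.82.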

Abstract:

BACKGROUND: No reliable tool to predict the outcome of acute kidney injury (AKI) exists. HYPOTHESIS: A statistically derived scoring system can accurately predict outcome in dogs with AKI managed with hemodialysis. ANIMALS: One hundred and eighty-two client-owned dogs with AKI. METHODS: Logistic regression analyses were performed initially on clinical variables available on the 1st day of hospitalization for relevance to outcome. Variables with P ≤ .1 were considered for further analyses. Continuous variables outside the reference range were divided into quartiles to yield quartile-specific odds ratios (ORs) for survival. Models were developed by incorporating weighting factors assigned to each quartile based on the OR, using either the integer value of the OR (Model A) or the exact OR (Models B or C, when the etiology was known). A predictive score for each model was calculated for each dog by summing all weighting factors. In Model D, actual values for continuous variables were used in a logistic regression model. Receiver-operating curve analyses were performed to assess sensitivities, specificities, and optimal cutoff points for all models. RESULTS: Higher scores were associated with decreased probability of survival (P < .001). Models A, B, C, and D correctly classified outcomes in 81, 83, 87, and 76% of cases, respectively, and optimal sensitivities/specificities were 77/85, 81/85, 83/90 and 92/61%, respectively. CONCLUSIONS AND CLINICAL RELEVANCE: The models allowed outcome prediction that corresponded with actual outcome in our cohort. However, each model should be validated further in independent cohorts. The models may also be useful to assess AKI severity.
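Model A's integer-weighting scheme might look like the sketch below. The variables, quartile cut-offs, weights, and the score cutoff are all invented for illustration; the study derived its actual factors from the first-day data of the 182-dog cohort.

```python
# Hypothetical Model-A-style score: each out-of-range variable maps to a
# quartile, the quartile's integer weight (derived from its OR in the study)
# is its weighting factor, and the dog's score is the sum of the factors.
WEIGHTS = {
    "creatinine":   [(5.0, 1), (8.0, 2), (12.0, 4)],  # (upper bound, weight), invented
    "urine_output": [(0.5, 3), (1.0, 2), (2.0, 1)],
}

def quartile_weight(var, value):
    for upper, weight in WEIGHTS[var]:
        if value <= upper:
            return weight
    return WEIGHTS[var][-1][1] + 1     # beyond the last cut-off

def score(dog):
    return sum(quartile_weight(var, val) for var, val in dog.items())

dog = {"creatinine": 9.1, "urine_output": 0.4}
s = score(dog)
predicted_survivor = s < 6             # hypothetical optimal cutoff from ROC analysis
```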

Abstract:

This paper deals with an event-bus tour booked by Bollywood film fans. During the tour, the participants visit selected locations of famous Bollywood films at various sites in Switzerland. Moreover, the tour includes stops for lunch and shopping. Each day, up to five buses operate the tour; for organizational reasons, two or more buses cannot stay at the same location simultaneously. The planning problem is how to compute a feasible schedule for each bus such that the total waiting time (primary objective) and the total travel time (secondary objective) are minimized. We formulate this problem as a mixed-integer linear program, and we report on computational results obtained with the Gurobi solver.
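A brute-force miniature of the objective structure, with a single bus, invented travel and opening times, and lexicographic ranking by (waiting time, travel time) to mirror the primary and secondary objectives. The real model is a MILP over up to five buses with the constraint that two buses never stay at the same location simultaneously.

```python
from itertools import permutations

# Toy single-bus tour: choose the visiting order that minimises total waiting
# time first and total travel time second. All times (minutes) are invented.
TRAVEL = {("L1", "L2"): 30, ("L2", "L1"): 30, ("L1", "L3"): 20,
          ("L3", "L1"): 20, ("L2", "L3"): 15, ("L3", "L2"): 15}
OPENS = {"L1": 0, "L2": 60, "L3": 40}   # earliest admission time per location
VISIT = 25                              # minutes spent at each location

def evaluate(order):
    """Return (total waiting time, total travel time) for one visiting order."""
    t = wait = travel = 0
    prev = None
    for loc in order:
        if prev is not None:
            travel += TRAVEL[(prev, loc)]
            t += TRAVEL[(prev, loc)]
        if t < OPENS[loc]:
            wait += OPENS[loc] - t      # bus waits until the site opens
            t = OPENS[loc]
        t += VISIT
        prev = loc
    return wait, travel                 # tuple comparison = lexicographic objective

best = min(permutations(["L1", "L2", "L3"]), key=evaluate)
```

On this instance the order L1, L3, L2 incurs no waiting at all with 35 minutes of travel, so it wins under the lexicographic objective.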

Abstract:

Electron-microprobe analysis, single-crystal X-ray diffraction with an area detector, and high-resolution transmission electron microscopy show that minerals related to wagnerite, triplite and triploidite, which are monoclinic Mg, Fe and Mn phosphates with the formula Me2+2PO4(F,OH), constitute a modulated series based on the average triplite structure. Modulation occurs along b and may be commensurate (2b periodicity) or incommensurate but generally close to integer values (∼3b, ∼5b, ∼7b, ∼9b), i.e. close to polytypic behaviour. As a result, the Mg- and F-dominant minerals magniotriplite and wagnerite can no longer be considered polymorphs of Mg2PO4F, i.e., there is no basis for recognizing them as distinct species. Given that wagnerite has priority (1821 vs. 1951), the name magniotriplite should be discarded in favour of wagnerite. Hydroxylwagnerite, end-member Mg2PO4OH, occurs in pyrope megablasts along with talc, clinochlore, kyanite, rutile and secondary apatite in two samples from lenses of pyrope–kyanite–phengite–quartz-schist within metagranite in the coesite-bearing ultrahigh-pressure metamorphic unit of the Dora-Maira Massif, western Alps, Vallone di Gilba, Val Varaita, Piemonte, Italy. Electron microprobe analyses of holotype hydroxylwagnerite and of the crystal with the lowest F content gave, in wt%: P2O5 44.14, 43.99; SiO2 0.28, 0.02; SO3 –, 0.01; TiO2 0.20, 0.16; Al2O3 0.06, 0.03; MgO 48.82, 49.12; FeO 0.33, 0.48; MnO 0.01, 0.02; CaO 0.12, 0.10; Na2O 0.01, –; F 5.58, 4.67; H2O (calc) 2.94, 3.36; –O = F 2.35, 1.97; Sum 100.14, 99.98, corresponding to (Mg1.954Fe0.007Ca0.003Ti0.004Al0.002Na0.001)Σ=1.971(P1.003Si0.008)Σ=1.011O4(OH0.526F0.474)Σ=1 and (Mg1.971Fe0.011Ca0.003Ti0.003Al0.001)Σ=1.989(P1.002Si0.001)Σ=1.003O4(OH0.603F0.397)Σ=1, respectively. Owing to the paucity of material, H2O could not be measured, so OH was calculated from the deficit in F assuming stoichiometry, i.e., F + OH = 1 per formula unit.
Holotype hydroxylwagnerite is optically biaxial (+), α 1.584(1), β 1.586(1), γ 1.587(1) (589 nm); 2VZ (meas.) = 43(2)°; orientation Y = b. Single-crystal X-ray diffraction gives monoclinic symmetry, space group P21/c, a = 9.646(3) Å, b = 12.7314(16) Å, c = 11.980(4) Å, β = 108.38(4)°, V = 1396.2(8) Å3, Z = 16; i.e., hydroxylwagnerite is the OH-dominant analogue of wagnerite [β-Mg2PO4(OH)] and a high-pressure polymorph of althausite, holtedahlite, and α- and ε-Mg2PO4(OH). We suggest that the group of minerals related to wagnerite, triplite and triploidite constitutes a triplite–triploidite super-group that can be divided into F-dominant phosphates (triplite group), OH-dominant phosphates (triploidite group), O-dominant phosphates (staněkite group) and an OH-dominant arsenate (sarkinite). The distinction among the three groups and a potential fourth group is based only on chemical features, i.e., occupancy of anion or cation sites. The structures of these minerals are all based on the average triplite structure, with a modulation controlled by the ratio of the Mg, Fe2+, Fe3+ and Mn2+ ionic radii to the (O,OH,F) ionic radii.
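The stoichiometric assumption F + OH = 1 per formula unit lets OH be recovered from the measured F content; applied to the F values of the two analyses quoted above, it reproduces the reported OH values.

```python
# OH could not be measured directly; it follows from the assumed
# (F,OH)-site stoichiometry F + OH = 1 apfu.
def oh_from_f(f_apfu):
    """OH (apfu) from F (apfu), assuming the (F,OH) site sums to 1."""
    return round(1.0 - f_apfu, 3)

oh_holotype = oh_from_f(0.474)   # holotype hydroxylwagnerite -> 0.526
oh_low_f = oh_from_f(0.397)      # crystal with the lowest F content -> 0.603
```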

Abstract:

We calculate the anomalous dimensions of operators with large global charge J in certain strongly coupled conformal field theories in three dimensions, such as the O(2) model and the supersymmetric fixed point with a single chiral superfield and a W = Φ³ superpotential. Working in a 1/J expansion, we find that the large-J sector of both examples is controlled by a conformally invariant effective Lagrangian for a Goldstone boson of the global symmetry. For both these theories, we find that the lowest state with charge J is always a scalar operator whose dimension Δ_J satisfies the sum rule J² Δ_J − (J²/2 + J/4 + 3/16) Δ_{J−1} − (J²/2 + J/4 + 3/16) Δ_{J+1} = 0.04067 up to corrections that vanish at large J. The spectrum of low-lying excited states is also calculable explicitly: for example, the second-lowest primary operator has spin two and dimension Δ_J + √3. In the supersymmetric case, the dimensions of all half-integer-spin operators lie above the dimensions of the integer-spin operators by a gap of order J^(+1/2). The propagation speeds of the Goldstone waves and heavy fermions are 1/√2 and ±1/2 times the speed of light, respectively. These values, including the negative one, are necessary for the consistent realization of the superconformal symmetry at large J.

Abstract:

Libraries of learning objects may serve as a basis for deriving course offerings that are customized to the needs of different learning communities or even individuals. Several ways of organizing this course-composition process are discussed. Course composition needs a clear understanding of the dependencies between the learning objects. We therefore discuss the metadata for object relationships proposed in different standardization projects, especially those suggested in the Dublin Core Metadata Initiative. Based on these metadata we construct adjacency matrices and graphs, and we show how Gozinto-type computations can be used to determine direct and indirect prerequisites for certain learning objects. The metadata may also be used to define integer programming models which can be applied to support the instructor in formulating specifications for selecting objects, or which allow a computer agent to select learning objects automatically. Such decision models could also be helpful for a learner navigating through a library of learning objects. Finally, we sketch a graph-based procedure for manual or automatic sequencing of the learning objects.
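The Gozinto-type computation of direct and indirect prerequisites can be illustrated with a transitive closure of the adjacency matrix (here via Warshall's algorithm) on a small invented prerequisite graph.

```python
# From the adjacency matrix of direct "is a prerequisite of" relations,
# the transitive closure yields direct plus indirect prerequisites.
# Object names and edges are invented for illustration.
OBJECTS = ["intro", "vectors", "matrices", "eigenvalues"]
# DIRECT[i][j] = 1 means OBJECTS[i] is a direct prerequisite of OBJECTS[j]
DIRECT = [[0, 1, 1, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 1],
          [0, 0, 0, 0]]

def closure(adj):
    """Warshall's algorithm: reach[i][j] is truthy iff i is a (direct or
    indirect) prerequisite of j."""
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

REACH = closure(DIRECT)
prereqs_of_eigenvalues = [OBJECTS[i] for i in range(len(OBJECTS)) if REACH[i][3]]
```

For this graph, "eigenvalues" inherits "intro" and "vectors" as indirect prerequisites via "matrices", which is exactly the information a selection or sequencing model needs.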

Abstract:

Music by Carl Maria von Weber. Poem by Friedrich Kind. Text written by A. Wendt after Friedrich Kind's Jubel-Cantate.

Abstract:

Public preferences for policy are formed in a little-understood process that is not adequately described by the traditional economic theory of choice. In this paper I suggest that U.S. aggregate support for health reform can be modeled as tradeoffs among a small number of behavioral values and the stage of policy development. The theory underlying the model is based on Samuelson et al.'s (1986) work and Wilke's (1991) elaboration of it as the Greed/Efficiency/Fairness (GEF) hypothesis of motivation in the management of resource dilemmas, and on behavioral economics informed by Kahneman and Thaler's prospect theory.

The model developed in this paper employs ordered probit econometric techniques applied to data derived from U.S. polls taken from 1990 to mid-2003 that measured support for health reform proposals. Outcome data are four-tiered Likert counts; independent variables are dummies representing the presence or absence of operationalizations of each behavioral variable, along with an integer representing the policy process stage. Marginal effects of each independent variable predict how support levels change when that variable is triggered. Model estimation results indicate a vanishingly small likelihood that all coefficients are zero, and all variables have the signs expected from model theory.

Three hypotheses were tested: support will drain from health reform policy as it becomes increasingly well-articulated and approaches enactment; reforms appealing to fairness through universal health coverage will enjoy a higher degree of support than those targeted more narrowly; and health reforms calling for government operation of the health finance system will achieve lower support than those that do not. Model results support the first and last hypotheses. Contrary to expectations, universal health care proposals did not provide incremental support beyond those targeted to "deserving" populations (children, the elderly, working families). In addition, loss of autonomy (e.g., restrictions on choice of caregiver) is found to be the "third rail" of health reform, with significantly reduced support. When applied to a hypothetical health reform in which an employer-mandated Medical Savings Account policy is the centerpiece, the model predicts support that may be insufficient for enactment. These results indicate that the method developed in the paper may prove valuable to health policy designers.
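The ordered probit machinery behind the model can be sketched with the standard library alone: a linear index is mapped through cut-points to probabilities over the four Likert tiers. The index value and cut-points below are invented, not the estimated coefficients.

```python
from math import erf, sqrt

# Ordered probit: P(tier k) = Phi(c_k - xb) - Phi(c_{k-1} - xb),
# with c_0 = -inf and c_K = +inf. Cut-points and xb are hypothetical.
def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def tier_probs(xb, cuts=(-1.0, 0.0, 1.0)):
    """Probabilities over the four Likert support tiers for linear index xb."""
    cdf = [norm_cdf(c - xb) for c in cuts]
    return [cdf[0]] + [b - a for a, b in zip(cdf, cdf[1:])] + [1.0 - cdf[-1]]

# hypothetical index, e.g. a behavioral dummy switched on at a late policy stage
probs = tier_probs(xb=0.3)
```

A marginal effect in this setup is simply the change in a tier's probability when a dummy in xb is toggled, which is how the model translates estimates into predicted support shifts.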

Abstract:

Digital terrain models (DTM) typically contain large numbers of postings, from hundreds of thousands to billions. Many algorithms that run on DTMs require topological knowledge of the postings, such as finding nearest neighbors, finding the posting closest to a chosen location, etc. If the postings are arranged irregularly, topological information is costly to compute and to store. This paper offers a practical approach to organizing and searching irregularly-spaced data sets by presenting a collection of efficient algorithms (O(N), O(lg N)) that compute important topological relationships with only a simple supporting data structure. These relationships include finding the postings within a window, locating the posting nearest a point of interest, finding the neighborhood of postings nearest a point of interest, and ordering the neighborhood counter-clockwise. These algorithms depend only on two sorted arrays of two-element tuples, each holding a planimetric coordinate and an integer identification number indicating which posting the coordinate belongs to. There is one array for each planimetric coordinate (eastings and northings). These two arrays cost minimal overhead to create and store but permit the data to remain arranged irregularly.
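A minimal sketch of the supporting structure under invented coordinates: one sorted (coordinate, id) array per planimetric axis, with a window query built from O(lg N) binary searches on each array.

```python
from bisect import bisect_left, bisect_right

# Two sorted (coordinate, id) arrays, one per planimetric axis; a window query
# intersects the two binary-searched coordinate ranges. Postings are invented.
postings = [(12.0, 7.0), (3.0, 4.0), (9.5, 1.0), (4.2, 8.8)]  # (easting, northing)

east = sorted((e, i) for i, (e, n) in enumerate(postings))
north = sorted((n, i) for i, (e, n) in enumerate(postings))

def ids_in_range(arr, lo, hi):
    """Ids whose coordinate lies in [lo, hi], via two O(lg N) searches."""
    a = bisect_left(arr, (lo, -1))
    b = bisect_right(arr, (hi, len(arr)))
    return {i for _, i in arr[a:b]}

def window(e_lo, e_hi, n_lo, n_hi):
    """Postings whose coordinates fall inside the query window."""
    return ids_in_range(east, e_lo, e_hi) & ids_in_range(north, n_lo, n_hi)

inside = window(3.5, 10.0, 0.0, 9.0)
```

The nearest-posting and neighborhood queries described in the paper build on the same two arrays by scanning outward from the binary-search position.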

Abstract:

A characterization of a property of binary relations is of finite type if it is stated in terms of ordered T-tuples of alternatives for some positive integer T. A characterization of finite type can be used to determine in polynomial time whether a binary relation over a finite set has the property characterized. Unfortunately, Pareto representability in R² has no characterization of finite type (Knoblauch, 2002). This result is generalized below to R^l, l larger than 2. The method of proof is applied to other properties of binary relations.
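Transitivity is a standard example of a property with a characterization of finite type (T = 3: for all ordered triples, aRb and bRc imply aRc), and the finite-type form translates directly into a polynomial-time check on a finite set.

```python
from itertools import product

# Transitivity is stated in terms of ordered triples (T = 3), so it can be
# checked in O(n^3) time by enumerating all triples of alternatives.
def is_transitive(relation, universe):
    return all(((a, c) in relation) or not ((a, b) in relation and (b, c) in relation)
               for a, b, c in product(universe, repeat=3))

X = {1, 2, 3}
R_yes = {(1, 2), (2, 3), (1, 3)}
R_no = {(1, 2), (2, 3)}            # missing (1, 3), so not transitive
```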

Abstract:

Objective. To measure the demand for primary care and its associated factors by building and estimating a demand model of primary care in urban settings.

Data source. Secondary data from the 2005 California Health Interview Survey (CHIS 2005), a population-based random-digit-dial telephone survey conducted by the UCLA Center for Health Policy Research in collaboration with the California Department of Health Services and the Public Health Institute between July 2005 and April 2006.

Study design. A literature review was done to specify the demand model by identifying relevant predictors and indicators. CHIS 2005 data were utilized for demand estimation.

Analytical methods. Probit regression was used to estimate the use/non-use equation, and negative binomial regression was applied to the utilization equation with its non-negative integer dependent variable.

Results. The model included two equations: the use/non-use equation explained the probability of making a doctor visit in the past twelve months, and the utilization equation estimated the demand for primary care conditional on at least one visit. Among the independent variables, wage rate and income did not affect primary care demand, whereas age had a negative effect on demand. People with college and graduate educational levels were associated with 1.03 (p < 0.05) and 1.58 (p < 0.01) more visits, respectively, compared to those with no formal education. Insurance was significantly and positively related to the demand for primary care (p < 0.01). Need-for-care variables exhibited positive effects on demand (p < 0.01): existence of chronic disease was associated with 0.63 more visits, disability status with 1.05 more visits, and people with poor health status had 4.24 more visits than those with excellent health status.

Conclusions. The average probability of visiting doctors in the past twelve months was 85% and the average number of visits was 3.45. The study emphasized the importance of need variables in explaining healthcare utilization, as well as the impact of insurance, employment and education on demand. The two-equation model of decision-making, estimated with probit and negative binomial regression methods, was a useful approach to demand estimation for primary care in urban settings.
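The two-part structure links the reported averages directly, assuming (an assumption, since the abstract does not say) that the 3.45 figure is the overall rather than the conditional mean number of visits.

```python
# Two-part demand model: E[visits] = P(any visit) * E[visits | at least one].
# Under the stated assumption, the conditional mean among users follows
# from the two sample averages reported in the abstract.
p_any_visit = 0.85          # average probability of any doctor visit
mean_visits_overall = 3.45  # assumed to be the unconditional mean

mean_visits_users = round(mean_visits_overall / p_any_visit, 2)
```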

Abstract:

IBAMar (http://www.ba.ieo.es/ibamar) is a regional database that brings together all physical and biochemical data obtained by multiparametric probes (CTDs equipped with different sensors) during the cruises managed by the Balearic Center of the Spanish Institute of Oceanography (COB-IEO). It has recently been extended to include data obtained from classical hydro casts using oceanographic Niskin or Nansen bottles. The result is a database that includes a main core of hydrographic data: temperature (T), salinity (S), dissolved oxygen (DO), fluorescence and turbidity; complemented by bio-chemical data: dissolved inorganic nutrients (phosphate, nitrate, nitrite and silicate) and chlorophyll-a. Different technologies and methodologies were used by different teams over the four decades of data sampling at the COB-IEO. Despite this, the data have been reprocessed using the same protocols, and a standard QC has been applied to each variable; the result is a regional database of homogeneous, good-quality data.

Data acquisition and quality control (QC): 94% of the data come from Sea-Bird SBE 911 and SBE 25 CTDs. S and DO were calibrated on board using water samples whenever a rosette sampler was available (70% of the cases). All data from Sea-Bird CTDs were reviewed and post-processed with the software provided by Sea-Bird Electronics, and the data were averaged to 1 dbar vertical resolution. The general sampling methodology and pre-processing are described at https://ibamardatabase.wordpress.com/home/. Manual QC includes visual checks of metadata, duplicate data and outliers. Automatic QC includes range checks of variables by area (north of the Balearic Islands, south of the BI, and the Alboran Sea) and depth (27 standard levels), checks for spikes, and checks for density inversions. Nutrient QC includes a preliminary control and a range check on the observed level of the data to detect outliers relative to objectively analyzed data fields. A quality flag is assigned as an integer number depending on the result of the QC checks.
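A sketch of how an integer quality flag might be assigned from a range check plus a simple spike check. The flag codes, limits, and profile values are illustrative only, not IBAMar's actual per-area, per-level tables.

```python
# Toy automatic QC: out-of-range values and spikes get distinct integer flags.
GOOD, SUSPECT_RANGE, SUSPECT_SPIKE = 1, 3, 4   # hypothetical flag codes

def qc_flag(values, i, valid_range, spike_limit=0.5):
    """Flag sample i of a profile: range check first, then a spike check
    against both neighbours (skipped at the profile ends)."""
    lo, hi = valid_range
    if not (lo <= values[i] <= hi):
        return SUSPECT_RANGE
    if 0 < i < len(values) - 1:
        jump = min(abs(values[i] - values[i - 1]), abs(values[i] - values[i + 1]))
        if jump > spike_limit:                 # jumps away from both neighbours
            return SUSPECT_SPIKE
    return GOOD

temps = [13.2, 13.1, 15.9, 13.0, 12.9, 42.0]   # degrees C along a profile
flags = [qc_flag(temps, i, (5.0, 30.0)) for i in range(len(temps))]
```

Here the 15.9 value is flagged as a spike and the 42.0 value fails the range check, while the rest of the profile passes.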

Abstract:

Ocean Drilling Program (ODP) Leg 210 is one of very few deep-sea legs drilled along the eastern Canadian continental margin. Most other drilling on this margin has been carried out by the petroleum industry on the shallow-water regions of the Scotian shelf and the Grand Banks (see Doeven, 1983, for nannofossil studies). Deep Sea Drilling Project (DSDP) Leg 12 Site 111 and ODP Leg 105 Site 647 were drilled in the general vicinity of Leg 210 but recovered no appreciable Lower Cretaceous (Albian-Cenomanian) sediments. Site 111 yielded indurated limestones dated tentatively as late Albian-early Cenomanian, whereas Site 647 encountered no Albian-Cenomanian sediments. Two sites (Sites 1276 and 1277) were drilled during Leg 210 in the Newfoundland Basin with the primary objective of recovering basement rocks to elucidate the rifting history of the North Atlantic Basin. The location for Leg 210 was selected because it is conjugate to the Iberia margin, which was drilled extensively during DSDP/ODP Legs 47B, 103, 149, and 173. A secondary but equally important objective was to recover the overlying sediments with the purpose of studying the postrift sedimentation history of this margin.