895 results for "Anchoring heuristic"


Relevance: 10.00%

Abstract:

Direct bone marrow (BM) injection has been proposed as a strategy to bypass homing inefficiencies associated with intravenous (IV) hematopoietic stem cell (HSC) transplantation. Despite physical delivery into the BM cavity, many donor cells are rapidly redistributed by vascular perfusion, potentially compromising efficacy. Anchoring donor cells to 3-dimensional (3D) multicellular spheroids, formed from mesenchymal stem/stromal cells (MSC), might improve direct BM transplantation. To test this hypothesis, relevant combinations of human umbilical cord blood-derived CD34+ cells and BM-derived MSC were transplanted into NOD/SCID gamma (NSG) mice using either IV or intrafemoral (IF) routes. IF transplantation resulted in higher human CD45+ and CD34+ cell engraftment within injected femurs relative to distal femurs regardless of cell combination, but did not improve overall CD45+ engraftment at 8 weeks. Analysis within individual mice revealed that, despite engraftment reaching near saturation within the injected femur, engraftment at distal hematopoietic sites, including peripheral blood, spleen, and the non-injected femur, could be poor. Our data suggest that the retention of human HSC within the BM following direct BM injection enhances local chimerism at the expense of systemic chimerism in this xenogeneic model.

Relevance: 10.00%

Abstract:

In this paper, we are concerned with energy-efficient area monitoring using information coverage in wireless sensor networks, where collaboration among multiple sensors can enable accurate sensing of a point in a given area-to-monitor even if that point falls outside the physical coverage of all the sensors. We refer to any set of sensors that can collectively sense all points in the entire area-to-monitor as a full area information cover. We first propose a low-complexity heuristic algorithm to obtain full area information covers. Using these covers, we then obtain the optimum schedule for activating the sensing activity of the various sensors that maximizes the sensing lifetime. Scheduling sensor activity using the optimum schedules obtained from the proposed algorithm is shown to achieve significantly longer sensing lifetimes than those achieved using physical coverage. Relaxing the full area coverage requirement to partial area coverage (e.g., treating 95% area coverage as adequate instead of 100%) further enhances the lifetime.
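The abstract does not spell out the cover-construction or scheduling steps; as an illustrative sketch only (the set-based sensor model, the greedy selection rule, and the one-battery-unit-per-slot cost are assumptions, not the paper's algorithm), a cover-based activation schedule might look like:

```python
def greedy_cover(points, coverage, available):
    """Greedily pick sensors from `available` until all points are covered.
    coverage[s] is the set of points sensor s can help sense (standing in
    for the paper's information-coverage notion).  Returns a cover, or None
    if the remaining sensors cannot cover the area."""
    uncovered = set(points)
    cover = set()
    candidates = set(available)
    while uncovered:
        best = max(candidates, key=lambda s: len(coverage[s] & uncovered),
                   default=None)
        if best is None or not (coverage[best] & uncovered):
            return None
        cover.add(best)
        candidates.remove(best)
        uncovered -= coverage[best]
    return cover

def schedule(points, coverage, battery):
    """Activate one full cover per time slot; each activation costs one
    battery unit.  Returns how many slots the area stays fully monitored."""
    lifetime = 0
    while True:
        alive = {s for s, b in battery.items() if b > 0}
        cover = greedy_cover(points, coverage, alive)
        if cover is None:
            return lifetime
        for s in cover:
            battery[s] -= 1
        lifetime += 1
```

Rotating among several distinct covers, rather than draining one cover, is what stretches the sensing lifetime in this kind of scheme.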

Relevance: 10.00%

Abstract:

In this paper, we are concerned with algorithms for scheduling the sensing activity of sensor nodes that are deployed to sense/measure point-targets in wireless sensor networks using information coverage. Defining a set of sensors which can collectively sense a target accurately as an information cover, we propose an algorithm to obtain Disjoint Set of Information Covers (DSIC), which achieves a longer network lifetime than the set of covers obtained using the Exhaustive-Greedy-Equalized Heuristic (EGEH) algorithm proposed recently in the literature. We also present a detailed complexity comparison between the DSIC and EGEH algorithms.

Relevance: 10.00%

Abstract:

This paper addresses the problem of discovering business process models from event logs. Existing approaches to this problem strike various tradeoffs between accuracy and understandability of the discovered models. With respect to the second criterion, empirical studies have shown that block-structured process models are generally more understandable and less error-prone than unstructured ones. Accordingly, several automated process discovery methods generate block-structured models by construction. These approaches, however, intertwine the concern of producing accurate models with that of ensuring their structuredness, sometimes sacrificing the former to ensure the latter. In this paper we propose an alternative approach that separates these two concerns. Instead of directly discovering a structured process model, we first apply a well-known heuristic technique that discovers more accurate but sometimes unstructured (and even unsound) process models, and then transform the resulting model into a structured one. An experimental evaluation shows that our “discover and structure” approach outperforms traditional “discover structured” approaches with respect to a range of accuracy and complexity measures.

Relevance: 10.00%

Abstract:

Higher education is faced with the challenge of strengthening students' competencies for the constantly evolving technology-mediated practices of knowledge work. The knowledge creation approach to learning (Paavola et al., 2004; Hakkarainen et al., 2004) provides a theoretical tool to address learning and teaching organized around complex problems and the development of shared knowledge objects, such as reports, products, and new practices. As in professional work practices, it appears necessary to design sufficient open-endedness and complexity into students' teamwork in order to generate unpredictable and both practically and epistemologically challenging situations. The studies of the thesis examine what kinds of practices are observed when student teams engage in knowledge-creating inquiry processes, how the students themselves perceive the process, and how to facilitate inquiry with technology mediation, tutoring, and pedagogical models. Overall, 20 student teams' collaboration processes and productions were investigated in detail. This collaboration took place in teams or small groups of 3-6 students from multiple domain backgrounds. Two pedagogical models were employed to provide heuristic guidance for the inquiry processes: the progressive inquiry model and the distributed project model. Design-based research methodology was employed in combination with case study as the research design. Database materials from the courses' virtual learning environment constituted the main body of data, with additional data from students' self-reflections and student and teacher interviews. Study I examined the role of technology mediation and tutoring in directing students' knowledge production in a progressive inquiry process. The research investigated how the scale of scaffolding related to the nature of knowledge produced and the deepening of the question-explanation process.
In Study II, the metaskills of knowledge-creating inquiry were explored as a challenge for higher education: metaskills here refers to the individual, collective, and object-centered aspects of monitoring collaborative inquiry. Study III examined the design of two courses and how the elaboration of shared objects unfolded based on the two pedagogical models. Study IV examined how an arranged concept-development project for external customers promoted practices of distributed, partially virtual project work, and how the students coped with the knowledge-creation challenge. Overall, important indicators of knowledge-creating inquiry were the following: new versions of knowledge objects and artifacts demonstrated a deepening inquiry process, and the various productions were co-created through iterations of negotiation, drafting, and versioning by the team members. Students faced challenges in establishing collective commitment, devising practices to co-author and advance their reports, dealing with confusion, and managing culturally diverse teams. The progressive inquiry model, together with tutoring and technology, facilitated asking questions, generating explanations, and refocusing lines of inquiry. The involvement of the customers was observed to provide strong motivation for the teams. On this evidence, providing team-specific guidance, exposing students to models of scientific argumentation and expert work practices, and furnishing templates for the intended products appear to be fruitful ways to enhance inquiry processes. At the institutional level, educators would do well to explore ways of developing collaboration with external customers, public organizations, or companies, and between educational units, in order to enhance educational practices of knowledge-creating inquiry.

Relevance: 10.00%

Abstract:

This study considers the scheduling problem observed in the burn-in operation of semiconductor final testing, where jobs are associated with release times, due dates, processing times, sizes, and non-agreeable release times and due dates. The burn-in oven is modeled as a batch-processing machine which can process a batch of several jobs as long as the total size of the jobs does not exceed the machine capacity; the processing time of a batch equals the longest processing time among all the jobs in the batch. Due to the importance of on-time delivery in semiconductor manufacturing, the objective of this problem is to minimize total weighted tardiness. We formulate the scheduling problem as an integer linear programming model and empirically show its computational intractability. We therefore propose a few simple greedy heuristic algorithms and a meta-heuristic algorithm, simulated annealing (SA). A series of computational experiments is conducted to evaluate the performance of the proposed heuristic algorithms, in comparison with exact solutions on various small-size problem instances and with estimated optimal solutions on various large-size real-life problem instances. The computational results show that the SA algorithm, with its initial solution obtained using our own proposed greedy heuristic algorithm, consistently finds a robust solution in a reasonable amount of computation time.
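The abstract names the ingredients (a greedy seed, SA over schedules, batch-formation rules) without their details; a minimal sketch, assuming a swap-neighbourhood SA over a job sequence that is greedily packed into capacity-feasible batches (the decode rule, EDD seed, and cooling schedule are illustrative assumptions, not the paper's algorithms):

```python
import math
import random

def decode(seq, jobs, cap):
    """Pack jobs, in sequence order, into batches that respect the capacity."""
    batches, cur, used = [], [], 0
    for j in seq:
        if used + jobs[j]["size"] > cap:
            batches.append(cur)
            cur, used = [], 0
        cur.append(j)
        used += jobs[j]["size"]
    if cur:
        batches.append(cur)
    return batches

def twt(batches, jobs):
    """Total weighted tardiness: a batch starts once the machine is free and
    all of its jobs are released; its length is its longest job."""
    t = total = 0
    for b in batches:
        start = max([t] + [jobs[j]["release"] for j in b])
        finish = start + max(jobs[j]["proc"] for j in b)
        t = finish
        total += sum(jobs[j]["weight"] * max(0, finish - jobs[j]["due"])
                     for j in b)
    return total

def anneal(jobs, cap, temp=10.0, cooling=0.95, steps=2000, seed=0):
    """Simulated annealing over job sequences, seeded with an EDD ordering."""
    rng = random.Random(seed)
    seq = sorted(jobs, key=lambda j: jobs[j]["due"])   # earliest-due-date seed
    cur = best = twt(decode(seq, jobs, cap), jobs)
    best_seq = list(seq)
    for _ in range(steps):
        i, k = rng.sample(range(len(seq)), 2)          # swap neighbourhood
        seq[i], seq[k] = seq[k], seq[i]
        cand = twt(decode(seq, jobs, cap), jobs)
        if cand <= cur or rng.random() < math.exp(-(cand - cur) / temp):
            cur = cand
            if cur < best:
                best, best_seq = cur, list(seq)
        else:
            seq[i], seq[k] = seq[k], seq[i]            # undo the swap
        temp *= cooling
    return best, best_seq
```

Because the search starts from the greedy seed and tracks the incumbent, the returned cost can never exceed the seed's cost, mirroring the paper's finding that a good greedy initial solution helps SA.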

Relevance: 10.00%

Abstract:

Integrating low dielectric permittivity (low-k) polymers with metals is an exacting fundamental challenge because poor bonding between low-polarizability moieties and metals precludes good interfacial adhesion. Conventional adhesion-enhancing methods, such as using intermediary layers, are unsuitable for engineering polymer/metal interfaces in many applications because of the collateral increase in dielectric permittivity. Here, we demonstrate a completely new approach, without surface treatments or intermediary layers, to obtain an excellent interfacial fracture toughness of > 13 J/m(2) in a model system comprising copper and a cross-linked polycarbosilane with k ~ 2.7, obtained by curing a cyclolinear polycarbosilane in air. Our results suggest that interfacial oxygen-catalyzed molecular ring-opening and anchoring of the opened ring moieties of the polymer to copper is the main toughening mechanism. This novel approach of realizing adherent low-k polymer/metal structures without intermediary layers, by activating metal-anchoring polymer moieties at the interface, could be adapted for applications such as device wiring and packaging, and laminates and composites.

Relevance: 10.00%

Abstract:

We view the association of concepts as a complex network and present a heuristic for clustering concepts that takes into account the underlying network structure of their associations. Clusters generated by our approach are qualitatively better than clusters generated by the conventional spectral clustering mechanism used for graph partitioning.

Relevance: 10.00%

Abstract:

In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells, and each cell is assumed to possess an uncertainty value. The UAVs have to cooperatively search these cells, taking limited endurance, sensor, and communication range constraints into account. Due to limited endurance, the UAVs need to return to the base station for refuelling, and also need to select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes endurance time constraints into account and uses game-theoretic strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent can return to one of the available bases. A set of paths is formed using these cells, from which the game-theoretic strategies select a path that yields maximum uncertainty reduction. We explore non-cooperative Nash, cooperative, and security strategies from game theory to enhance the search effectiveness. Monte Carlo simulations are carried out, which show the superiority of the game-theoretic strategies over a greedy strategy for different look-ahead step-length paths. Among the game-theoretic strategies, the non-cooperative Nash and cooperative strategies perform similarly in the ideal case, but the Nash strategy performs better than the cooperative strategy when the perceived information differs. We also propose a heuristic based on partitioning the search space into sectors to reduce computational overhead without performance degradation.

Relevance: 10.00%

Abstract:

In this paper we propose a general Linear Programming (LP) based formulation and solution methodology for obtaining optimal solutions to the load distribution problem in divisible load scheduling. We exploit the power of the versatile LP formulation to propose algorithms that yield exact solutions to several very general load distribution problems for which either no solutions or only heuristic solutions were previously available. We consider both star (single-level tree) networks and linear daisy-chain networks, with processors equipped with front-ends, which form the generic models for several important network topologies. We consider arbitrary processing-node availability or release times, and general models for communication delays and computation time that account for constant overheads such as start-up times in communication and computation. The optimality of the LP-based algorithms is proved rigorously.
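For intuition, the LP machinery ultimately enforces the divisible-load optimality principle that, in an optimal distribution, all participating processors finish together. In the idealized special case with no communication delays or start-up overheads (a deliberate simplification of the general models treated in the paper), that principle yields a closed form:

```python
def equal_finish_split(w):
    """Split a unit divisible load across processors whose per-unit compute
    times are w, so that all processors finish at the same instant.
    Equal finish times alpha_i * w_i = const give alpha_i proportional
    to 1/w_i.  Communication delays and overheads are ignored here."""
    inv = [1.0 / wi for wi in w]
    total = sum(inv)
    return [x / total for x in inv]
```

For example, with per-unit compute times [1, 2, 4], the fractions come out to [4/7, 2/7, 1/7] and every processor finishes at time 4/7; the paper's LP approach recovers this kind of solution while also handling release times and communication costs.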

Relevance: 10.00%

Abstract:

In this paper we develop and numerically explore the modeling heuristic of using saturation attempt probabilities as state-dependent attempt probabilities in an IEEE 802.11e infrastructure network carrying packet telephone calls and TCP-controlled file downloads, using Enhanced Distributed Channel Access (EDCA). We build upon the fixed-point analysis and performance insights in [1]. When a certain number of nodes of each class are contending for the channel (i.e., have nonempty queues), their attempt probabilities are taken to be those obtained from saturation analysis for that number of nodes. We then model the queue dynamics at the network nodes. With the proposed heuristic, the system evolution at channel slot boundaries becomes a Markov renewal process, and regenerative analysis yields the desired performance measures. The results obtained from this approach match well with ns2 simulations. We find that, with the default IEEE 802.11e EDCA parameters for AC 1 and AC 3, the voice call capacity decreases if even one file download is initiated by some station. Subsequently, reducing the number of voice calls increases the file download capacity almost linearly (by 1/3 Mbps per voice call for the 11 Mbps PHY).
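The saturation attempt probabilities that the heuristic plugs in come from a fixed-point computation. As a sketch in the spirit of the classical Bianchi-style saturation model (the window parameters and the damped iteration are illustrative assumptions; this is not the exact EDCA analysis of [1]):

```python
def saturation_attempt_prob(n, W=16, m=6, iters=500):
    """Damped fixed-point iteration for the per-slot attempt probability tau
    of n saturated nodes, in a Bianchi-style model with minimum contention
    window W and m backoff stages.  p is the conditional collision
    probability seen by a tagged node's attempt."""
    tau = 2.0 / (W + 1)                      # collision-free starting guess
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        new = 2.0 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1)
                                   + p * W * (1 - (2 * p) ** m))
        tau = 0.5 * tau + 0.5 * new          # damping aids convergence
    return tau
```

As the number of contending nodes grows, the collision probability rises and the attempt probability falls, which is the state dependence the heuristic exploits: a lookup table of tau values, indexed by the number of nonempty queues per class, drives the Markov renewal model.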

Relevance: 10.00%

Abstract:

In this paper, we study the performance of client-Access Point (AP) association policies in IEEE 802.11 based WLANs. In many scenarios, clients have a choice of APs with which they can associate. We are interested in finding association policies that lead to optimal system performance. More specifically, we study the stability of different association policies as a function of the spatial distribution of arriving clients. For each policy, we find the range of client arrival rates for which the system is stable. For small networks, we use Lyapunov function methods to formally establish the stability or instability of certain policies in specific scenarios. The RAT heuristic policy introduced in our prior work is shown to have very good stability properties when compared to several other natural policies. We also validate our analytical results by detailed simulation employing the IEEE 802.11 MAC.

Relevance: 10.00%

Abstract:

In many IEEE 802.11 WLAN deployments, wireless clients have a choice of access points (APs) to connect to. In current systems, clients associate with the access point with the strongest signal-to-noise ratio. However, such an association mechanism can lead to unequal load sharing, resulting in diminished system performance. In this paper, we first provide a numerical approach based on stochastic dynamic programming to find the optimal client-AP association algorithm for a small topology consisting of two access points. Using the value iteration algorithm, we determine the optimal association rule for the two-AP topology. Next, utilizing the insights obtained from the optimal association rule for the two-AP case, we propose a near-optimal heuristic that we call RAT. We test the efficacy of RAT by considering more realistic arrival patterns and a larger topology. Our results show that RAT performs very well in these scenarios as well. Moreover, RAT lends itself to a fairly simple implementation.
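The value-iteration step the abstract mentions can be sketched generically; the toy transition and reward encoding in the usage below is an assumption for illustration, not the paper's two-AP model:

```python
def value_iteration(states, actions, trans, reward, gamma=0.9, eps=1e-8):
    """Generic value iteration.  trans[(s, a)] lists (prob, next_state)
    pairs, reward[(s, a)] is the immediate reward.  Iterates the Bellman
    optimality update until values change by less than eps, then extracts
    the greedy policy."""
    def q(s, a, V):
        # one-step lookahead value of taking action a in state s
        return reward[(s, a)] + gamma * sum(p * V[s2] for p, s2 in trans[(s, a)])

    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(q(s, a, V) for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {s: max(actions, key=lambda a: q(s, a, V)) for s in states}
    return V, policy
```

In the paper's setting, the states would encode the load at the two APs and the actions would be the possible associations for an arriving client; the extracted greedy policy is the optimal association rule from which the RAT heuristic draws its insights.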

Relevance: 10.00%

Abstract:

The physical design of a VLSI circuit involves circuit partitioning as a subtask. Typically, it is necessary to partition a large electrical circuit into several smaller circuits such that the total cross-wiring is minimized. This problem is a variant of the more general graph partitioning problem, and no polynomial-time algorithm is known for obtaining an optimal partition. The heuristic procedure proposed by Kernighan and Lin [1,2] requires O(n² log₂ n) time to obtain a near-optimal two-way partition of a circuit with n modules. In the VLSI context, due to the large problem sizes involved, this computational requirement is unacceptably high. This paper is concerned with the hardware acceleration of the Kernighan-Lin procedure on an SIMD architecture. The proposed parallel partitioning algorithm requires O(n) processors and has a time complexity of O(n log₂ n). In the proposed scheme, the reduced array architecture is employed with due consideration towards cost effectiveness and VLSI realizability of the architecture. The authors are not aware of any earlier attempts to parallelize a circuit partitioning algorithm in general, or the Kernighan-Lin algorithm in particular. The use of the reduced array architecture is novel and opens up the possibility of using this computing structure for several other applications in electronic design automation.
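For reference, the serial procedure being accelerated computes gain values D (external minus internal cost) for tentative pair exchanges and then commits the best prefix of swaps. A compact, unoptimized sketch of one Kernighan-Lin pass (simple dict-of-dicts graph representation assumed):

```python
def cut_weight(adj, A, B):
    """Total weight of edges crossing the (A, B) partition."""
    return sum(adj[a].get(b, 0) for a in A for b in B)

def kl_pass(adj, part_a, part_b):
    """One Kernighan-Lin improvement pass.  adj[u] maps neighbours of u to
    edge weights.  Tentatively exchanges node pairs in order of gain, then
    commits the prefix of exchanges with the best cumulative gain."""
    A, B = set(part_a), set(part_b)

    def D(u, own, other):
        # external cost minus internal cost of node u
        return (sum(adj[u].get(v, 0) for v in other)
                - sum(adj[u].get(v, 0) for v in own))

    gains, swaps = [], []
    free_a, free_b = set(A), set(B)
    while free_a and free_b:
        g, a, b = max(((D(a, A, B) + D(b, B, A) - 2 * adj[a].get(b, 0), a, b)
                       for a in free_a for b in free_b), key=lambda t: t[0])
        gains.append(g)
        swaps.append((a, b))
        A.remove(a); B.add(a); B.remove(b); A.add(b)   # tentative exchange
        free_a.remove(a); free_b.remove(b)

    # choose k maximizing the cumulative gain of the first k exchanges
    best_k, best_sum, running = 0, 0, 0
    for k, g in enumerate(gains, 1):
        running += g
        if running > best_sum:
            best_k, best_sum = k, running

    A, B = set(part_a), set(part_b)                    # commit best prefix
    for a, b in swaps[:best_k]:
        A.remove(a); A.add(b)
        B.remove(b); B.add(a)
    return A, B
```

The pair-selection loop is where the O(n² log₂ n) serial cost concentrates, and it is exactly the part the paper maps onto O(n) SIMD processors.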

Relevance: 10.00%

Abstract:

In the superconducting state of high-Tc oxides, it is possible to conceive that the mobility of the charge-carrier pairs is a consequence of the absence of a net chemical force on them. On this assumption, we have examined a heuristic relation between Tc and a simple function of the electronegativities of the constituent atoms. We find that Tc varies approximately linearly with the fractional electronegativity of all cations considered together.
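A reported approximately linear relation like this is checked with an ordinary least-squares line fit; the sketch below shows the computation (the numbers in the usage are made up for illustration, not the paper's data):

```python
def fit_line(x, y):
    """Ordinary least-squares fit y ≈ slope * x + intercept,
    returning (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx
```

Fitting Tc against the fractional cation electronegativity of each compound in this way would give the slope and intercept of the heuristic relation, and the residuals would quantify how approximate the linearity is.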