338 results for HDFS bottleneck
Abstract:
The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies. On the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses tend to be inverse: while Social Annotation suffers from problems such as ambiguity or lack of precision, ontologies were especially designed to eliminate those. Conversely, ontologies suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of being regarded as competing paradigms, the obvious potential synergies from a combination of both motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data. While several techniques to exploit the emergent patterns have been proposed, a systematic analysis, especially regarding paradigms from the field of ontology learning, is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors to capture emergent semantics from Social Annotation Systems. We focus on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords; here, we assess the usefulness of various clustering techniques. As a prerequisite for inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights are used to inform the final task, namely the creation of concept hierarchies. For this purpose, generality-based algorithms exhibit advantages compared to clustering approaches.
To complement the identification of suitable methods to capture semantic structures, we next analyze several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings. From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then examine system abuse and spam. While observing a mixed picture, we suggest that spammers be assessed individually instead of being disregarded as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies for enhancing both Social Annotation and semantic systems. These comprise, on the one hand, tools which foster the emergence of semantics and, on the other hand, applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services for a Social Semantic Web.
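To make the kind of measures discussed in this abstract concrete, here is a minimal, self-contained sketch; the co-occurrence-based relatedness and the frequency-based generality proxy are illustrative assumptions, not necessarily the exact measures evaluated in the thesis.

```python
from collections import Counter
from itertools import combinations

# Toy folksonomy: each post is the set of keywords one user attached to one resource.
posts = [
    {"python", "programming", "tutorial"},
    {"python", "snake", "animal"},
    {"programming", "software", "tutorial"},
    {"snake", "animal", "reptile"},
]

tag_freq = Counter(tag for post in posts for tag in post)
cooc = Counter()
for post in posts:
    for a, b in combinations(sorted(post), 2):
        cooc[(a, b)] += 1

def relatedness(a, b):
    # Jaccard-style co-occurrence relatedness between two keywords.
    inter = cooc[tuple(sorted((a, b)))]
    union = tag_freq[a] + tag_freq[b] - inter
    return inter / union if union else 0.0

def generality(tag):
    # Keyword frequency as a crude proxy for generality.
    return tag_freq[tag]

print(relatedness("python", "programming"))            # "python" in its programming sense
print(relatedness("python", "animal"))                 # "python" in its animal sense (ambiguity)
print(sorted(tag_freq, key=generality, reverse=True))  # more general keywords first
```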
Abstract:
A key problem in object recognition is selection, namely, the problem of identifying regions in an image within which to start the recognition process, ideally by isolating regions that are likely to come from a single object. Such a selection mechanism has been found to be crucial in reducing the combinatorial search involved in the matching stage of object recognition. Even though selection helps recognition, it has largely remained unsolved because of the difficulty of isolating regions belonging to objects under complex imaging conditions involving occlusions, changing illumination, and varying object appearances. This thesis presents a novel approach to the selection problem by proposing a computational model of visual attentional selection as a paradigm for selection in recognition. In particular, it proposes two modes of attentional selection, namely, the attracted and pay attention modes, as being appropriate for data- and model-driven selection in recognition. An implementation of this model has led to new ways of extracting color, texture and line group information in images, and to their subsequent use in isolating areas of the scene likely to contain the model object. Among the specific results in this thesis are: a method of specifying color by perceptual color categories for fast color region segmentation and color-based localization of objects, and a result showing that the recognition of texture patterns on model objects is possible under changes in orientation and occlusions without detailed segmentation. The thesis also presents an evaluation of the proposed model by integrating it with a 3D-from-2D object recognition system and recording the improvement in performance. These results indicate that attentional selection can significantly alleviate the computational bottleneck in object recognition, both by reducing the number of features and by reducing the number of matches during recognition, using the information derived during selection. Finally, these studies have revealed a surprising use of selection, namely, in the partial solution of the pose of a 3D object.
Abstract:
The memory hierarchy is the main bottleneck in modern computer systems, as the gap between processor and memory speed continues to grow. The situation in embedded systems is even worse. The memory hierarchy consumes a large amount of chip area and energy, which are precious resources in embedded systems. Moreover, embedded systems have multiple design objectives, such as performance, energy consumption, and area. Customizing the memory hierarchy for specific applications is an important way to take full advantage of limited resources and maximize performance. However, traditional custom memory hierarchy design methodologies are phase-ordered: they separate application optimization from memory hierarchy architecture design, which tends to result in locally optimal solutions. In traditional hardware-software co-design methodologies, much of the work has focused on utilizing reconfigurable logic to partition the computation, whereas using reconfigurable logic for memory hierarchy design is seldom addressed. In this paper, we propose a new framework for designing the memory hierarchy of embedded systems. The framework takes advantage of flexible reconfigurable logic to customize the memory hierarchy for specific applications, and it combines application optimization and memory hierarchy design to obtain a globally optimal solution. Using the framework, we performed a case study to design a new software-controlled instruction memory that showed promising potential.
Abstract:
TCP flows from applications such as the web or FTP are well supported by a Guaranteed Minimum Throughput Service (GMTS), which provides a minimum network throughput to the flow and, if possible, extra throughput. We propose a scheme for a GMTS using Admission Control (AC) that is able to provide different minimum throughputs to different users and that is suitable for "standard" TCP flows. Moreover, we consider a multidomain scenario where the scheme is used in one of the domains, and we propose mechanisms for the interconnection with neighboring domains. The whole scheme uses a small set of packet classes in a core-stateless network, where each class has a different discarding priority in the queues assigned to it. The AC method involves only edge nodes and uses a special probing packet flow (marked with the highest discarding priority class) that is sent continuously from ingress to egress along a path. The available throughput on the path is measured at the egress from flow aggregates and sent back to the ingress. At the ingress, each flow is detected implicitly and then subjected to admission control. If it is accepted, it receives the GMTS and its packets are marked with the lowest discarding priority classes; otherwise, it receives a best-effort service. The scheme is evaluated through simulation in a simple "bottleneck" topology using different traffic loads consisting of "standard" TCP flows that carry files of varying sizes.
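A minimal sketch of the ingress-side decision described in this abstract, assuming the egress periodically reports the throughput measured on the probing flow; the class, method names and the simple threshold test are illustrative, not the exact mechanism of the paper.

```python
class IngressEdge:
    """Toy ingress logic for the guaranteed minimum throughput service."""

    def __init__(self):
        self.available_bps = 0.0          # latest estimate fed back by the egress

    def probe_feedback(self, measured_bps):
        # The egress measures how much of the high-discard-priority probe
        # aggregate survived the path and reports it back to the ingress.
        self.available_bps = measured_bps

    def admit(self, flow_id, min_bps):
        # Flows are detected implicitly (e.g. first packet of a new 5-tuple);
        # accept only if the probe-derived estimate covers the requested minimum.
        if self.available_bps >= min_bps:
            self.available_bps -= min_bps          # crude local bookkeeping
            return f"{flow_id}: GMTS, mark with low discarding priority"
        return f"{flow_id}: best-effort only"

edge = IngressEdge()
edge.probe_feedback(8e6)                 # 8 Mb/s currently available on the path
print(edge.admit("flow-1", 5e6))         # admitted
print(edge.admit("flow-2", 5e6))         # rejected, falls back to best effort
```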
Abstract:
Photo-mosaicing techniques have become popular for seafloor mapping in various marine science applications. However, the common methods cannot accurately map regions with high relief and topographical variations. Ortho-mosaicing, borrowed from photogrammetry, is an alternative technique that takes the 3-D shape of the terrain into account. A serious bottleneck is the volume of elevation information that needs to be estimated from the video data, fused, and processed to generate a composite ortho-photo covering a relatively large seafloor area. We present a framework that combines the advantages of dense depth-map and 3-D feature estimation techniques based on visual motion cues. The main goal is to identify and reconstruct certain key terrain feature points that adequately represent the surface with minimal complexity, in the form of piecewise planar patches. The proposed implementation utilizes local depth maps for feature selection, while tracking over several views enables 3-D reconstruction by bundle adjustment. Experimental results with synthetic and real data validate the effectiveness of the proposed approach.
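For context, the bundle-adjustment step mentioned above can be summarised by its standard objective; the notation below is generic textbook notation, not taken from the paper itself.

```latex
% Standard bundle-adjustment objective: jointly refine camera parameters P_i
% and 3-D points X_j by minimising the total (optionally robustified)
% reprojection error over all views i in which feature j was tracked.
\min_{\{P_i\},\,\{X_j\}} \; \sum_{i,j} \rho\!\left( \left\| x_{ij} - \pi(P_i, X_j) \right\|^2 \right)
% x_{ij}: observed image position of feature j in view i
% \pi:    projection of point X_j through camera P_i
% \rho:   robust loss (e.g. Huber), or the identity for plain least squares
```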
Abstract:
Contents
Introduction
1. Emotional intelligence, transformational leadership and gender: factors influencing organizational performance / Ana María Galindo Londoño, Sara Urrego Mayorga; Advisor: Juan Carlos Espinosa Méndez.
2. The role of women in leadership / Andrea Patricia Cuestas Díaz; Advisor: Francoise Venezia Contreras Torres.
3. Transformational leadership, organizational climate, job satisfaction and performance: a literature review / Juliana Restrepo Orozco, Ángela Marcela Ochoa Rodríguez; Advisor: Françoise Venezia Contreras Torres.
4. "E-Leadership": a perspective on the world of globalized companies / Ángela Beatriz Morales Morales, Mónica Natalia Aguilera Velandia; Advisor: Juan Carlos Espinosa.
5. Leadership and culture: a review / Daniel Alejandro Romero Galindo; Advisor: Francoise Venezia Contreras Torres.
6. Research on the nature of managerial work: a literature review / Julián Felipe Rodríguez Rivera, María Isabel Álvarez Rodríguez; Advisor: Juan Javier Saavedra Mayorga.
7. Women in senior management in the Colombian context / Ana María Moreno, Juliana Moreno Jaramillo; Advisor: Françoise Venezia Contreras Torres.
8. The influence of personality on the discourse and leadership of George W. Bush after September 11, 2001 / Karen Eliana Mesa Torres; Advisor: Juan Carlos Espinosa.
9. Research on the field of followership: a literature review / Christian D. Báez Millán, Leidy J. Pinzón Porras; Advisor: Juan Javier Saavedra Mayorga.
10. Leadership from the perspective of power and influence: a literature review / Lina María García, Juan Sebastián Naranjo; Advisor: Juan Javier Saavedra Mayorga.
11. Managerial work for leaders and managers: an integrative view of organizational roles / Lina Marcela Escobar Campos, Daniel Mora Barrero; Advisor: Rafael Piñeros.
12. Emotional involvement in decision making / Lina Rocío Poveda C., Gloria Johanna Rueda L.; Advisor: Francoise Contreras T.
13. Stress and its relationship to leadership / María Camila García Sierra, Diana Paola Rocha Cárdenas; Advisor: Juan Carlos Espinosa.
14. "Burnout and engagement" / María Paola Jaramillo Barrios, Natalia Rojas Mancipe; Advisor: Rafael Piñeros.
Abstract:
The activated sludge system is the most widely used biological treatment for wastewater purification worldwide. Its performance depends on the correct operation of both the biological reactor and the secondary settler. When the settling phase does not proceed correctly, non-settled biomass escapes with the effluent, impacting the receiving environment. Solids separation problems are currently one of the main causes of inefficiency in the operation of activated sludge systems around the world. They include filamentous bulking, viscous bulking, biological foaming, dispersed growth, pin-point floc and uncontrolled denitrification. The origin of separation problems generally lies in an imbalance between the main microbial communities involved in biomass settling: floc-forming bacteria and filamentous bacteria. Because of this microbiological origin, their identification and control is not an easy task for plant managers. Knowledge-Based Decision Support Systems (KBDSS) are a group of software tools characterized by their ability to represent heuristic knowledge and handle large amounts of data. The objective of this thesis is the development and validation of a KBDSS specifically designed to support plant managers in the control of solids separation problems of microbiological origin in activated sludge systems. To achieve this main objective, the KBDSS must exhibit the following characteristics: (1) the implementation of the system must be feasible and realistic to guarantee its correct operation; (2) the reasoning of the system must be dynamic and evolving, to adapt to the needs of the target domain; and (3) the reasoning of the system must be intelligent. First, to guarantee the feasibility of the system, a small-scale study (Catalonia) was carried out that identified the variables most commonly used for the diagnosis and monitoring of the problems, the most viable control methods, and the main limitations the system should address. The results of previous applications have shown that the main limitation in the development of KBDSSs is the structure of the knowledge base (KB), where all the knowledge acquired about the domain is represented, together with the reasoning processes to follow. In our case, given the dynamics of the domain, these limitations could be aggravated if this design were not optimal. In this regard, the Domino Model has been proposed as a tool for the conceptual design of the system. Finally, in line with the last objective concerning intelligent reasoning, an Expert System (based on expert knowledge) and a Case-Based Reasoning system (based on experience) have been integrated as the main intelligent systems in charge of carrying out the reasoning of the KBDSS. Chapters 5 and 6 present, respectively, the development of the dynamic Expert System (ES) and of the temporal Case-Based Reasoning system, called the Episode-Based Reasoning System (EBRS). Next, Chapter 7 gives details of the implementation of the overall system (KBDSS) in the G2 environment.
Then, Chapter 8 presents the results obtained during the 11 months of system validation, in which aspects such as the accuracy, capability and usefulness of the system were validated both experimentally (prior to implementation) and through its real implementation at the Girona WWTP (EDAR de Girona). Finally, Chapter 9 lists the main conclusions derived from this thesis.
Abstract:
Three naming strategies are discussed that allow the processes of a distributed application to continue being addressed by their original logical name throughout all the migrations they may be forced to undertake for performance-improvement reasons. A simple centralised solution is discussed first, which exhibits a software bottleneck as the number of processes increases; two other solutions are then considered that entail different communication schemes and different communication overheads for the naming protocol. All these strategies are based on the facility whereby each process is allowed to survive after migration, even at its original site, only to provide a forwarding service for those communications that used its obsolete address.
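A minimal sketch of the forwarding idea behind the strategies described above; the addresses and helper names are illustrative assumptions, not taken from the paper.

```python
forwarders = {}   # obsolete address -> current address

def migrate(old_address, new_address):
    # The old incarnation survives only to forward traffic to the new location.
    forwarders[old_address] = new_address

def send(cached_address, message):
    addr = cached_address
    while addr in forwarders:          # follow the forwarding chain left by migrations
        addr = forwarders[addr]
    print(f"delivering {message!r} at {addr}")
    return addr                        # the caller can refresh its cached address

migrate("node1:5000", "node2:5000")    # process P moved from node1 to node2
migrate("node2:5000", "node3:5000")    # ... and then to node3
send("node1:5000", "hello P")          # a stale address still works via forwarding
```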
Abstract:
Recently, two approaches have been introduced that distribute the molecular fragment mining problem. The first applies a master/worker topology; the second, a completely distributed peer-to-peer system, solves the scalability problem caused by the bottleneck at the master node. However, in many real-world scenarios the participating computing nodes cannot communicate directly due to administrative policies such as security restrictions. Thus, potential computing power is not accessible to accelerate the mining run. To overcome this shortcoming, this work introduces a hierarchical topology of computing resources, which distributes the management over several levels and adapts to the natural structure of such multi-domain architectures. The most important aspect is the load-balancing scheme, which has been designed and optimized for the hierarchical structure. The approach allows dynamic aggregation of heterogeneous computing resources and is applied to wide-area network scenarios.
Abstract:
A full assessment of para-virtualization is important, because without knowledge about the various overheads, users cannot judge whether using virtualization is a good idea or not. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as on para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as to look at the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). In order to assess these virtualization systems, we run the benchmarks on bare metal, then on the para-virtualization, and finally we turn on monitoring and logging. The latter is important, as users are interested in the Service Level Agreements (SLAs) used by Cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different systems: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), which are all servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each Virtual Machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application; the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine, and they provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made in the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware that it is running on virtualized hardware and not on bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. In order to support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly it is necessary to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for Cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
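The overhead comparison described above reduces to a simple ratio between paired benchmark runs; the sketch below is a minimal illustration with made-up timings, not measurements from the paper.

```python
def overhead_pct(bare_metal_time, virtualized_time):
    """Relative slowdown of a benchmark under virtualization,
    expressed as a percentage of the bare-metal runtime."""
    return 100.0 * (virtualized_time - bare_metal_time) / bare_metal_time

# Hypothetical timings (seconds) for one Netlib kernel on one host.
runs = {
    "bare metal":                 120.0,
    "para-virtualized":           126.0,
    "para-virtualized + logging": 131.0,
}
base = runs["bare metal"]
for config, t in runs.items():
    print(f"{config:30s} {overhead_pct(base, t):5.1f}% overhead")
```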
Abstract:
A two-sector Ramsey-type model of growth is developed to investigate the relationship between agricultural productivity and economy-wide growth. The framework takes into account the peculiarities of agriculture both in production (reliance on a fixed natural resource base) and in consumption (life-sustaining role and low income elasticity of food demand). The transitional dynamics of the model establish that when preferences respect Engel's law, the level and growth rate of agricultural productivity influence the speed of capital accumulation. A calibration exercise shows that a small difference in agricultural productivity has drastic implications for the rate and pattern of growth of the economy. Hence, low agricultural productivity can form a bottleneck limiting growth, because high food prices result in a low saving rate.
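A common way to make preferences "respect Engel's law", as the abstract puts it, is a Stone-Geary specification with a subsistence food requirement; the form below is an illustrative assumption rather than the paper's exact specification.

```latex
% Stone-Geary instantaneous utility over agricultural (food) consumption c_a
% and manufactured consumption c_m, with subsistence requirement \gamma > 0:
u(c_a, c_m) = \beta \ln(c_a - \gamma) + (1 - \beta) \ln c_m, \qquad 0 < \beta < 1
% Because \gamma must be covered first, the expenditure share of food falls
% toward \beta as income rises, so the income elasticity of food demand is
% below one (Engel's law). Low agricultural productivity then ties up more
% resources in covering \gamma, slowing capital accumulation elsewhere.
```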
Abstract:
Background: Patterns of mtDNA variation within a species reflect long-term population structure, but may also be influenced by maternally inherited endosymbionts, such as Wolbachia. These bacteria often alter host reproductive biology and can drive particular mtDNA haplotypes through populations. We investigated the impacts of Wolbachia infection and geography on mtDNA variation in the diamondback moth, a major global pest whose geographic distribution reflects both natural processes and transport via human agricultural activities.
Results: The mtDNA phylogeny of 95 individuals sampled from 10 countries on four continents revealed two major clades. One contained only Wolbachia-infected individuals from Malaysia and Kenya, while the other contained only uninfected individuals from all countries, including Malaysia and Kenya. Within the uninfected group was a further clade containing all individuals from Australasia and displaying very limited sequence variation. In contrast, a biparental nuclear gene phylogeny did not show infected and uninfected clades, supporting the notion that maternally inherited Wolbachia are responsible for the mtDNA pattern. Only about 5% (15/306) of our global sample of individuals was infected with the plutWBI isolate, and even within infected local populations many insects were uninfected. Comparisons of infected and uninfected isofemale lines revealed that plutWBI is associated with sex ratio distortion: uninfected lines have a 1:1 sex ratio, while infected ones show a 2:1 female bias.
Conclusion: The main correlate of mtDNA variation in P. xylostella is the presence or absence of the plutWBI infection. This is associated with substantial sex ratio distortion, and the underlying mechanisms deserve further study. In contrast, geographic origin is a poor predictor of moth mtDNA sequences, reflecting human activity in moving the insects around the globe. The exception is a clade of Australasian individuals, which may reflect a bottleneck during their recent introduction to this region.
High throughput, high resolution selection of polymorphic microsatellite loci for multiplex analysis
Abstract:
Background: Large-scale genetic profiling, mapping and genetic association studies require access to a series of well-characterised and polymorphic microsatellite markers with distinct and broad allele ranges. Selection of complementary microsatellite markers with non-overlapping allele ranges has historically proved to be a bottleneck in the development of multiplex microsatellite assays. The characterisation process for each microsatellite locus can be laborious and costly, given the need for numerous locus-specific fluorescent primers.
Results: Here, we describe a simple and inexpensive approach to select useful microsatellite markers. The system is based on the pooling of multiple unlabelled PCR amplicons and their subsequent ligation into a standard cloning vector. A second round of amplification, utilising generic labelled primers targeting the vector and unlabelled locus-specific primers targeting the microsatellite flanking region, yields allelic profiles that are representative of all individuals contained within the pool. The suitability of various DNA pool sizes was then tested for this purpose. DNA template pools containing between 8 and 96 individuals were assessed for determining the allele ranges of individual microsatellite markers across a broad population. This helped resolve the balance between using pools large enough to detect many alleles and the risk of including so many individuals in a pool that rare alleles are over-diluted and do not appear in the pooled microsatellite profile. Pools of DNA from 12 individuals allowed the reliable detection of all alleles present in the pool.
Conclusion: The use of generic vector-specific fluorescent primers and unlabelled locus-specific primers provides a high-resolution, rapid and inexpensive approach for the selection of highly polymorphic microsatellite loci that possess non-overlapping allele ranges for use in large-scale multiplex assays.
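A quick back-of-the-envelope check of the pool-size trade-off described above; it simply assumes diploid individuals contributing equally and a single copy of the rarest allele in the pool.

```python
def rarest_allele_fraction(pool_size, copies_of_rare_allele=1, ploidy=2):
    """Fraction of template carrying the rarest allele in a DNA pool,
    assuming equal contribution from each (diploid) individual."""
    return copies_of_rare_allele / (ploidy * pool_size)

for n in (8, 12, 24, 48, 96):
    print(f"pool of {n:3d} individuals: singleton allele ≈ "
          f"{100 * rarest_allele_fraction(n):.1f}% of template")
# A pool of 12 keeps a singleton allele above ~4% of the template, whereas in a
# pool of 96 it falls to ~0.5% and risks dropping below the detection threshold
# of the pooled profile.
```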
Time-resolved gas-phase kinetic and quantum chemical studies of the reaction of silylene with oxygen
Abstract:
Time-resolved kinetic studies of the reaction of silylene, SiH2, generated by laser flash photolysis of phenylsilane, have been carried out to obtain rate constants for its bimolecular reaction with O2. The reaction was studied in the gas phase over the pressure range 1-100 Torr in SF6 bath gas, at five temperatures in the range 297-600 K. The second-order rate constants at 10 Torr were fitted to the Arrhenius equation:
log(k / cm^3 molecule^-1 s^-1) = (-11.08 ± 0.04) + (1.57 ± 0.32 kJ mol^-1) / (RT ln 10)
The decrease in rate constant values with increasing temperature, although systematic, is very small. The rate constants showed slight increases in value with pressure at each temperature, but this was scarcely beyond experimental uncertainty. From estimates of Lennard-Jones collision rates, this reaction occurs in roughly 1 in 20 collisions, almost independent of pressure and temperature. Ab initio calculations at the G3 level, backed further by multi-configurational (MC) SCF calculations augmented by second-order perturbation theory (MRMP2), support a mechanism in which the initial adduct, H2SiOO, formed in the triplet state (T), undergoes intersystem crossing to the more stable singlet state (S) prior to further low-energy isomerisation processes leading, via a sequence of steps, ultimately to dissociation products, of which the lowest-energy pair are H2O + SiO. The decomposition of the intermediate cyclo-siladioxirane, via O-O bond fission, plays an important role in the overall process. The bottleneck for the overall process appears to be the T -> S crossing in H2SiOO. This process has a small spin-orbit coupling matrix element, consistent with an estimate of its rate constant of 1 x 10^9 s^-1 obtained with the aid of RRKM theory. This interpretation preserves the idea that, as in its reactions in general, SiH2 initially reacts at the encounter rate with O2. The low values of the secondary reaction barriers on the potential energy surface account for the lack of an observed pressure dependence. Some comparisons are drawn with the reactions of CH2 + O2 and SiCl2 + O2.
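As a quick sanity check of the fitted expression, evaluating it at 297 K and comparing against a typical gas-kinetic (Lennard-Jones) collision rate reproduces the quoted "roughly 1 in 20 collisions"; the collision-rate value below is an order-of-magnitude assumption, not a figure from the paper.

```python
import math

R = 8.314e-3          # gas constant, kJ mol^-1 K^-1

def k_SiH2_O2(T):
    """Rate constant from the fitted Arrhenius expression (cm^3 molecule^-1 s^-1)."""
    log_k = -11.08 + 1.57 / (R * T * math.log(10))
    return 10 ** log_k

k = k_SiH2_O2(297.0)
k_collision = 3e-10   # assumed Lennard-Jones collision rate, cm^3 molecule^-1 s^-1
print(f"k(297 K) ≈ {k:.2e} cm^3 molecule^-1 s^-1")   # about 1.6e-11
print(f"≈ 1 in {k_collision / k:.0f} collisions")     # on the order of 20
```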
Abstract:
The past decade has witnessed explosive growth in mobile subscribers and services. With the purpose of providing better, swifter and cheaper services, radio network optimisation plays a crucial role but faces enormous challenges. The concept of Dynamic Network Optimisation (DNO) has therefore been introduced to optimally and continuously adjust network configurations in response to changes in network conditions and traffic. However, the realization of DNO has been seriously hindered by the bottleneck of optimisation speed. An advanced distributed parallel solution is presented in this paper to bridge this gap by accelerating the sophisticated proprietary network optimisation algorithm, while maintaining the optimisation quality and numerical consistency. The ariesoACP product from Arieso Ltd serves as the main platform for acceleration. This solution has been prototyped, implemented and tested. Results from real projects exhibit high scalability and substantial acceleration, with average speed-ups of 2.5, 4.9 and 6.1 on distributed 5-core, 9-core and 16-core systems, respectively. This significantly outperforms other parallel solutions such as multi-threading. Furthermore, improved optimisation outcomes, along with high correctness and self-consistency, have also been achieved. Overall, this is a breakthrough towards the realization of DNO.
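Read as parallel efficiencies, the quoted speed-ups already indicate where the remaining serial and communication costs sit; the short computation below only restates the numbers given in the abstract.

```python
# Average speed-ups reported for the distributed ariesoACP acceleration.
results = {5: 2.5, 9: 4.9, 16: 6.1}   # cores -> average speed-up

for cores, speedup in results.items():
    efficiency = speedup / cores
    print(f"{cores:2d} cores: speed-up {speedup:.1f}, efficiency {efficiency:.0%}")
# Efficiency stays around 50% up to 9 cores and drops to roughly 38% at 16 cores,
# the usual signature of a serial fraction and communication overhead limiting
# further scaling.
```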