854 results for Concept-based Retrieval
Abstract:
The outcome of this research is an Intelligent Retrieval System for Conditions of Contract Documents. The objective of the research is to improve the method of retrieving data from a computer version of a construction Conditions of Contract document. SmartDoc, a prototype computer system, has been developed for this purpose. The system provides recommendations to aid the user in the process of retrieving clauses from the construction Conditions of Contract document. The prototype system integrates two computer technologies: hypermedia and expert systems. Hypermedia is utilized to provide a dynamic way of retrieving data from the document. Expert systems technology is utilized to build a set of rules that activate the recommendations to aid the user during clause retrieval. The rules are based on experts' knowledge. The prototype system helps the user retrieve related clauses that are not explicitly cross-referenced but, according to expert experience, are relevant to the topic that the user is interested in.
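As an illustration of the rule-based recommendation idea, here is a minimal sketch; the clause numbers and rule contents are hypothetical, not taken from the thesis.

```python
# Hypothetical sketch of an expert-system rule base for clause retrieval:
# each rule links the clause a user is viewing to related clauses that are
# not cross-referenced in the document but are relevant per expert judgement.

RULES = {
    # clause being viewed -> expert-recommended related clauses (illustrative)
    "12.1 Variations": ["13.4 Valuation of Variations", "20.1 Contractor's Claims"],
    "8.4 Extension of Time": ["20.1 Contractor's Claims", "4.12 Unforeseeable Conditions"],
}

def recommend(current_clause: str) -> list[str]:
    """Return expert-recommended clauses for the clause the user is reading."""
    return RULES.get(current_clause, [])

print(recommend("8.4 Extension of Time"))
```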
Abstract:
Methods for accessing data on the Web have been the focus of active research over the past few years. In this thesis we propose a method for representing Web sites as data sources. We designed Data Extractor, a data retrieval solution that allows us to define queries to Web sites and process the resulting data sets. Data Extractor is being integrated into the MSemODB heterogeneous database management system; with its help, database queries can be distributed over both local and Web data sources within the MSemODB framework. Data Extractor treats Web sites as data sources, controlling query execution and data retrieval, and works as an intermediary between the applications and the sites. Data Extractor utilizes a two-fold "custom wrapper" approach for information retrieval. Wrappers for the majority of sites are easily built using a powerful and expressive scripting language, while complex cases are processed using Java-based wrappers that utilize a specially designed library of data retrieval, parsing and Web access routines. In addition to wrapper development, we thoroughly investigate issues associated with Web site selection, analysis and processing. Data Extractor is designed to act as a data retrieval server as well as an embedded data retrieval solution. We also use it to create mobile agents that are shipped over the Internet to the client's computer to perform data retrieval on behalf of the user. This approach allows Data Extractor to distribute and scale well. This study confirms the feasibility of building custom wrappers for Web sites: the approach provides accurate data retrieval together with the power and flexibility needed to handle complex cases.
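A minimal sketch of what a uniform wrapper contract in such a two-fold scheme might look like; all names are hypothetical, and the thesis's actual scripting language and Java library are not reproduced here.

```python
# Every wrapper, whether generated from a small extraction script or written
# in Java, exposes the same query/extract contract to the mediator.

from abc import ABC, abstractmethod

class SiteWrapper(ABC):
    """Uniform interface the mediator uses to treat a Web site as a data source."""

    @abstractmethod
    def query(self, params: dict) -> str:
        """Submit a query to the site and return the raw result page."""

    @abstractmethod
    def extract(self, page: str) -> list[dict]:
        """Parse the result page into structured records."""

class SimpleScriptedWrapper(SiteWrapper):
    """Stands in for a wrapper generated from a small extraction script."""

    def __init__(self, url_template: str):
        self.url_template = url_template

    def query(self, params: dict) -> str:
        # A real wrapper would fetch self.url_template.format(**params).
        return "<html>...</html>"

    def extract(self, page: str) -> list[dict]:
        # A real wrapper would apply the script's extraction rules here.
        return [{"field": "value"}]
```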
Abstract:
An experimental setup to measure the three-dimensional phase-intensity distribution of an infrared laser beam in the focal region is presented. It is based on the knife-edge method to perform a tomographic reconstruction and on a numerical method based on the transport of intensity equation to obtain the propagating wavefront. This experimental approach allows us to characterize a focused laser beam when imaging or interferometric arrangements are not possible. Thus, we have recovered the intensity and phase of an aberrated beam dominated by astigmatism. The phase evolution is fully consistent with that of the beam intensity along the optical axis. Moreover, this method is based on an expansion of both the irradiance and the phase information in a series of Zernike polynomials. We describe guidelines to choose a proper set of these polynomials depending on the experimental conditions and show that, by following these criteria, numerical errors can be reduced.
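For reference, the transport of intensity equation underlying this kind of phase retrieval relates the axial derivative of the measured intensity \(I\) to the transverse phase \(\phi\) (standard form; the paper's notation may differ):

\[ -k\,\frac{\partial I(x,y,z)}{\partial z} \;=\; \nabla_{\perp}\!\cdot\!\bigl( I(x,y,z)\,\nabla_{\perp}\phi(x,y,z) \bigr), \qquad k = \frac{2\pi}{\lambda}, \]

where \(\nabla_{\perp}\) acts in the plane transverse to propagation. Measuring \(I\) in a few planes along the axis gives the left-hand side, and the equation is then solved for \(\phi\), here via the Zernike expansion.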
Abstract:
This dissertation presents a study and experimental research on asymmetric coding of stereoscopic video. A review of 3D technologies, video formats and coding is first presented, and particular emphasis is then given to asymmetric coding of 3D content and to performance evaluation methods, based on subjective measures, for methods using asymmetric coding. The research objective was defined as an extension of the current concept of asymmetric coding for stereo video. To achieve this objective, the first step consists in defining regions in the spatial dimension of the auxiliary view with different perceptual relevance within the stereo pair, identified by a binary mask. These regions are then encoded with better quality (lower quantisation) for the most relevant ones and worse quality (higher quantisation) for those with lower perceptual relevance. The actual estimation of the relevance of a given region is based on a measure of disparity, namely the absolute difference between views. To allow encoding of a stereo sequence using this method, a reference H.264/MVC encoder (JM) has been modified to accept additional configuration parameters and inputs. The final encoder is still standard compliant. In order to show the viability of the method, subjective assessment tests were performed over a wide range of objective qualities of the auxiliary view. The results of these tests support three main conclusions. First, it is shown that the proposed method can be more efficient than traditional asymmetric coding when encoding stereo video at higher qualities/rates. Second, the method can also be used to extend the threshold at which uniform asymmetric coding methods start to have an impact on the subjective quality perceived by the observers. Finally, the issue of eye dominance is addressed: results from stereo still images displayed over a short period of time showed that it has little or no impact on the proposed method.
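A toy sketch of the region-selection step follows, assuming block-wise processing; the block size, threshold and QP offsets are illustrative assumptions, not the thesis's actual settings.

```python
import numpy as np

# Perceptual relevance of each block of the auxiliary view is estimated from
# the absolute difference between the two views; a binary mask then selects
# which blocks get a lower quantisation parameter (QP), i.e. better quality.

def relevance_mask(left: np.ndarray, right: np.ndarray,
                   block: int = 16, thresh: float = 12.0) -> np.ndarray:
    """1 where a block's mean absolute inter-view difference exceeds thresh."""
    diff = np.abs(left.astype(np.int16) - right.astype(np.int16))
    h, w = diff.shape
    mask = np.zeros((h // block, w // block), dtype=np.uint8)
    for by in range(h // block):
        for bx in range(w // block):
            blk = diff[by*block:(by+1)*block, bx*block:(bx+1)*block]
            mask[by, bx] = 1 if blk.mean() > thresh else 0
    return mask

def block_qp(mask: np.ndarray, base_qp: int = 32, delta: int = 4) -> np.ndarray:
    """Lower QP (better quality) for relevant blocks, higher for the rest."""
    return np.where(mask == 1, base_qp - delta, base_qp + delta)
```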
Abstract:
Efficient and effective approaches to dealing with the vast amount of visual information available nowadays are highly sought after. This is particularly the case for image collections, both personal and commercial. Due to the magnitude of these ever-expanding image repositories, annotating all images is infeasible, and search in such an image collection therefore becomes inherently difficult. Although content-based image retrieval techniques have shown much potential, such approaches also suffer from various problems that make them difficult to adopt in practice. In this paper, we follow a different approach, namely that of browsing image databases for image retrieval. In our Honeycomb Image Browser, large image databases are visualised on a hexagonal lattice with image thumbnails occupying hexagons. Arranged in a space-filling manner, visually similar images are located close together, enabling large image datasets to be navigated in a hierarchical manner. Various browsing tools are incorporated to allow for interactive exploration of the database. Experimental results confirm that our approach affords efficient image retrieval. © 2010 IEEE.
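The hexagonal-lattice placement can be made concrete with the standard axial-coordinate layout; this is a sketch of the geometry only, and the browser's similarity-driven assignment itself is not reproduced here.

```python
import math

# Thumbnails sit on axial hex coordinates (q, r); visually similar images
# are assigned neighbouring cells. Pointy-top layout, standard conversion.

def hex_to_pixel(q: int, r: int, size: float) -> tuple[float, float]:
    """Centre of the hexagon at axial coordinates (q, r)."""
    x = size * math.sqrt(3) * (q + r / 2)
    y = size * 1.5 * r
    return x, y

def neighbours(q: int, r: int) -> list[tuple[int, int]]:
    """The six hex cells adjacent to (q, r): candidates for similar images."""
    return [(q+1, r), (q-1, r), (q, r+1), (q, r-1), (q+1, r-1), (q-1, r+1)]
```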
Abstract:
Purpose: This paper extends the use of Radio Frequency Identification (RFID) data for accounting of warehouse costs and services. Time-Driven Activity-Based Costing (TDABC) methodology is enhanced with real-time RFID data on the duration of warehouse activities. This allows warehouse managers to have accurate and instant calculations of costs. The RFID-enhanced TDABC (RFID-TDABC) is proposed as a novel application of RFID technology. Research Approach: RFID-TDABC is implemented on the warehouse processes of a case study company. Implementation covers receiving, put-away, order picking, and despatching. Findings and Originality: RFID technology is commonly used for the identification and tracking of items. The use of RFID-generated information with TDABC can be successfully extended to the area of costing. This RFID-TDABC costing model will benefit warehouse managers with accurate and instant calculations of costs. Research Impact: There are still unexplored benefits to RFID technology in its applications in warehousing and the wider supply chain. A multi-disciplinary research approach led to combining RFID technology and the TDABC accounting method in order to propose RFID-TDABC. Combining methods and theories from different fields with RFID may lead researchers to develop new techniques such as the RFID-TDABC presented in this paper. Practical Impact: The RFID-TDABC concept will be of value to practitioners by showing how warehouse costs can be accurately measured using this approach. A better understanding of incurred costs may result in further optimisation of warehousing operations, lowering the costs of activities, and thus providing competitive pricing to customers. RFID-TDABC can be applied in the wider supply chain.
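A minimal sketch of the RFID-TDABC calculation: RFID reads supply activity start and end timestamps, and TDABC prices each activity as duration times a capacity cost rate. The rates and the event below are illustrative assumptions, not the paper's figures.

```python
from datetime import datetime

COST_RATE_PER_MIN = {         # capacity cost rate of each activity group
    "receiving": 0.80,        # EUR per minute (assumed figures)
    "put_away": 0.65,
    "order_picking": 0.90,
    "despatching": 0.75,
}

def activity_cost(activity: str, start: datetime, end: datetime) -> float:
    """Cost of one activity instance from its RFID-observed duration."""
    minutes = (end - start).total_seconds() / 60.0
    return minutes * COST_RATE_PER_MIN[activity]

cost = activity_cost("order_picking",
                     datetime(2024, 1, 8, 9, 0), datetime(2024, 1, 8, 9, 12))
print(f"{cost:.2f} EUR")   # 12 min x 0.90 EUR/min = 10.80 EUR
```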
Abstract:
The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.
At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs differs from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high-density I/O ports and interconnects, (5) a reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.
The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.
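A toy illustration of how a weighted-critical-area score could prioritize interconnects during test-path generation; the greedy packing below is illustrative only and is not the dissertation's actual algorithm.

```python
# Nets whose defects are most likely (largest weighted critical area) are
# placed earliest in the test paths; paths are then filled greedily.

def build_test_paths(nets: dict[str, float], path_len: int) -> list[list[str]]:
    """Pack nets into test paths in descending weighted-critical-area order."""
    ordered = sorted(nets, key=nets.get, reverse=True)
    return [ordered[i:i + path_len] for i in range(0, len(ordered), path_len)]

nets = {"n0": 3.2, "n1": 0.7, "n2": 5.1, "n3": 1.9}   # net -> weighted critical area
print(build_test_paths(nets, path_len=2))             # [['n2', 'n0'], ['n3', 'n1']]
```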
In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.
To address the scenario of high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear feedback shift register (LFSR), a multiple-input signature register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
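A minimal software sketch of the two BIST primitives named above, an LFSR for pattern generation and a MISR for response compaction; the 4-bit polynomial (x^4 + x^3 + 1) is an illustrative choice.

```python
def lfsr_patterns(seed: int, count: int, width: int = 4) -> list[int]:
    """Generate `count` pseudo-random patterns from a Fibonacci LFSR."""
    state, out = seed, []
    for _ in range(count):
        out.append(state)
        fb = ((state >> 3) ^ (state >> 2)) & 1        # taps at bits 3 and 2
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out

def misr_signature(responses: list[int], width: int = 4) -> int:
    """Fold a stream of responses into a single MISR signature."""
    sig = 0
    for r in responses:
        fb = ((sig >> 3) ^ (sig >> 2)) & 1
        sig = (((sig << 1) | fb) ^ r) & ((1 << width) - 1)
    return sig

patterns = lfsr_patterns(seed=0b1001, count=8)
print(patterns, hex(misr_signature(patterns)))
```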
In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced: the first minimizes the number of input test pins, and the second minimizes the number of output test pins. In addition, two subgroup configuration methods are proposed to generate subgroups inside each test group.
Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not toggle at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed, and the shared boundary length between blocks is then calculated. Based on the positional relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
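Viewed as graph colouring, the constraint that rail-sharing neighbours must receive different stagger values admits a simple greedy heuristic; the sketch below is illustrative and not the dissertation's algorithm.

```python
def assign_staggers(blocks: list[str],
                    shares_rail: set[frozenset[str]],
                    n_staggers: int) -> dict[str, int]:
    """Greedy: give each block the smallest stagger unused by rail-sharing neighbours."""
    stagger: dict[str, int] = {}
    for b in blocks:
        used = {stagger[o] for o in stagger
                if frozenset((b, o)) in shares_rail}
        stagger[b] = min(s for s in range(n_staggers) if s not in used)
    return stagger

edges = {frozenset(p) for p in [("A", "B"), ("B", "C"), ("A", "C")]}
print(assign_staggers(["A", "B", "C", "D"], edges, n_staggers=4))
# {'A': 0, 'B': 1, 'C': 2, 'D': 0}
```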
In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experimental results, and a set of test and design-for-test methods that make testing effective and feasible from a cost perspective.
Abstract:
Smart hydrogels for biomedical applications are highly researched materials. However, integrating them into a device for implantation is difficult. This paper investigates an integrated delivery device designed to deliver an electro-responsive hydrogel to a target location inside a blood vessel with the purpose of creating an occlusion. The paper describes the synthesis and characterization of a Pluronic/methacrylic acid sodium salt electro-responsive hydrogel. Application of an electrical bias decelerates the expansion of the hydrogel. An integrated delivery system was manufactured to deliver the hydrogel to the target location in the body. Ex vivo and in vivo experiments in the carotid artery of sheep were used to validate the concept. The hydrogel was able to completely occlude the blood vessel, reducing the blood flow from 245 to 0 ml/min after implantation. Ex vivo experiments showed that the hydrogel was able to withstand physiological blood pressures of more than 270 mmHg without dislodgement. The results showed that the electro-responsive hydrogel used in this paper can be used to create a long-term occlusion in a blood vessel without any apparent side effects. The delivery system developed is a promising device for the delivery of electro-responsive hydrogels.
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks and gene regulatory networks, there are few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications are restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications remains unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
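For reference, the phase-type form referred to above is standard: if \(\mathbf{T}\) is the sub-generator of the CTMC restricted to its transient states, \(\boldsymbol{\alpha}\) the initial distribution over those states, and \(\mathbf{t} = -\mathbf{T}\mathbf{1}\) the exit-rate vector, then the time to absorption (here, photon emission) has density

\[ f(t) \;=\; \boldsymbol{\alpha}\, e^{\mathbf{T} t}\, \mathbf{t}, \qquad t \ge 0 . \]

The geometry of the chromophore network fixes the rates inside \(\mathbf{T}\), which is what makes the sampling distribution directly programmable.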
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
Meanwhile, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms for a wide range of applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware that traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor/GPU as a specialized functional unit or organized as a discrete accelerator to bring substantial speedups and power savings.
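A small software analogue of what an RSU computes physically, drawing the absorption (photon-emission) time of a CTMC by Gillespie-style simulation; the states and rates below are illustrative.

```python
import random

def sample_absorption_time(Q: list[list[float]], absorbing: int,
                           start: int = 0) -> float:
    """Simulate the CTMC from `start` until it hits `absorbing`; return elapsed time."""
    state, t = start, 0.0
    while state != absorbing:
        rates = [r for j, r in enumerate(Q[state]) if j != state]
        total = sum(rates)
        t += random.expovariate(total)                 # exponential holding time
        u, acc = random.random() * total, 0.0
        for j, r in enumerate(Q[state]):               # pick next state by rate
            if j == state:
                continue
            acc += r
            if u <= acc:
                state = j
                break
    return t

# 3-state chain: 0 -> 1 -> 2 (absorbing), with back-transfer 1 -> 0.
Q = [[-1.0, 1.0, 0.0],
     [0.5, -2.0, 1.5],
     [0.0, 0.0, 0.0]]
print(sample_absorption_time(Q, absorbing=2))
```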
Abstract:
N-Heterocycles are ubiquitous in biologically active natural products and pharmaceuticals. Yet new syntheses and modifications of N-heterocycles are continually of interest for the purposes of expanding chemical space, finding quicker synthetic routes, developing better pharmaceuticals, and even providing new handles for molecular labeling. There are several forms of molecular labeling; the decision of where to place the label is as important as the choice of which visualization technique to emphasize.
Piperidine and indole are two of the most widely distributed N-heterocycles and thus were targeted for synthesis, functionalization, and labeling. The major functionalization of these scaffolds should include a nitrogen atom, while the inclusion of other groups will expand the utility of the method. Towards this goal, ease of synthesis and elimination of step-wise transformations are of the utmost concern. Here, the concept of electrophilic amination can be utilized as a way of introducing complex secondary and tertiary amines with minimal operations.
Molecular tags should be on or adjacent to an N-heterocycle as they are normally the motifs implicated at the binding site of enzymes and receptors. The labeling techniques should be useful to a chemical biologist, but should also in theory be useful to the medical community. The two types of labeling that are of interest to a chemist and a physician would be positron emission tomography (PET) and magnetic resonance imaging (MRI).
Coincidentally, the 3-positions of both piperidine and indole are historically difficult to access and modify. However, using electrophilic amination techniques, 3-functionalized piperidines can be synthesized in good yields from unsaturated amines. In the same manner, 3-labeled piperidines can be obtained; the piperidines can either be labeled with an azide for biochemical research or an 18F for PET imaging research. The novel electrophiles, N-benzenesulfonyloxyamides, can be reacted with indole in one of two ways: 3-amidation or 1-amidomethylation, depending on the exact reaction conditions. Lastly, a novel, hyperpolarizable 15N2-labeled diazirine has been developed as an exogenous and versatile tag for use in magnetic resonance imaging.
Abstract:
The amount and quality of available biomass is a key factor for sustainable livestock industry and agricultural-management decision making. Globally, 31.5% of land cover is grassland, while 80% of Ireland's agricultural land is grassland. In Ireland, grasslands are intensively managed and provide the cheapest feed source for animals. This dissertation presents a detailed state-of-the-art review of satellite remote sensing of grasslands, and the potential application of optical (Moderate-resolution Imaging Spectroradiometer (MODIS)) and radar (TerraSAR-X) time series imagery to estimate grassland biomass at two study sites (Moorepark and Grange) in the Republic of Ireland using both statistical and state-of-the-art machine learning algorithms. High-quality weather data available from the on-site weather station was also used to calculate the Growing Degree Days (GDD) for Grange, to determine the impact of ancillary data on biomass estimation. In situ and satellite data covering 12 years for the Moorepark and 6 years for the Grange study sites were used to predict grassland biomass using multiple linear regression and Adaptive Neuro-Fuzzy Inference System (ANFIS) models. The results demonstrate that a dense (8-day composite) MODIS image time series, along with high-quality in situ data, can be used to retrieve grassland biomass with high performance (R² = 0.86, p < 0.05, RMSE = 11.07 for Moorepark). The model for Grange was modified to evaluate the synergistic use of vegetation indices derived from remote sensing time series and accumulated GDD information. As GDD is strongly linked to plant development, or phenological stage, an improvement in biomass estimation would be expected. It was observed that, using the ANFIS model, the biomass estimation accuracy increased from R² = 0.76 (p < 0.05) to R² = 0.81 (p < 0.05) and the root mean square error was reduced by 2.72%. The work on the application of optical remote sensing was further developed using a TerraSAR-X Staring Spotlight mode time series over the Moorepark study site, to explore the extent to which very high resolution Synthetic Aperture Radar (SAR) data of interferometrically coherent paddocks can be exploited to retrieve grassland biophysical parameters. After filtering out the non-coherent plots, it is demonstrated that interferometric coherence can be used to retrieve grassland biophysical parameters (i.e., height, biomass), and that it is possible to detect changes due to grass growth, and to grazing and mowing events, when the temporal baseline is short (11 days). However, it is not possible to automatically and uniquely identify the cause of these changes based only on the SAR backscatter and coherence, due to the ambiguity caused by tall grass laid down by the wind. Overall, the work presented in this dissertation has demonstrated the potential of dense remote sensing and weather data time series to predict grassland biomass using machine-learning algorithms, where high-quality ground data were used for training. At present, a major limitation for national-scale biomass retrieval is the lack of spatial and temporal ground samples, which can be partially resolved by minor modifications to the existing PastureBaseIreland database, namely adding the location and extent of each grassland paddock to the database.
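For reference, the growing-degree-day accumulation commonly computed from such daily weather records is

\[ \mathrm{GDD} \;=\; \sum_{d=1}^{n} \max\!\left( \frac{T_{\max,d} + T_{\min,d}}{2} - T_{\mathrm{base}},\; 0 \right), \]

where \(T_{\max,d}\) and \(T_{\min,d}\) are the daily temperature extremes and \(T_{\mathrm{base}}\) is a crop-specific base temperature; the exact variant used in the dissertation is not reproduced here.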
As far as remote sensing data requirements are concerned, MODIS is useful for large-scale evaluation, but due to its coarse resolution it is not possible to detect variations within and between fields at the farm scale. However, this issue will be resolved in terms of spatial resolution by the Sentinel-2 mission, and when both satellites (Sentinel-2A and Sentinel-2B) are operational the revisit time will reduce to 5 days, which, together with Landsat-8, should provide enough cloud-free data for operational biomass estimation at a national scale. The Synthetic Aperture Radar Interferometry (InSAR) approach is feasible if there are enough coherent interferometric pairs available; however, this is difficult to achieve due to the temporal decorrelation of the signal. For repeat-pass InSAR over a vegetated area, even an 11-day temporal baseline is too long. In order to achieve better coherence, a very high resolution is required at the cost of spatial coverage, which limits its scope for use in an operational context at a national scale. Future InSAR missions with pair acquisition in Tandem mode will minimize the temporal decorrelation over vegetated areas for more focused studies. The proposed approach complements the current paradigm of Big Data in Earth Observation and illustrates the feasibility of integrating data from multiple sources. In the future, this framework can be used to build an operational decision support system for retrieval of grassland biophysical parameters based on data from long-term planned optical missions (e.g., Landsat, Sentinel) that will ensure the continuity of data acquisition. Similarly, the Spanish X-band PAZ and TerraSAR-X2 missions will ensure the continuity of TerraSAR-X and COSMO-SkyMed.
Abstract:
Utilization of graphene-covered waveguide inserts to form tunable waveguide resonators is theoretically explained and rigorously investigated by means of full-wave numerical electromagnetic simulations. Instead of using graphene-based switching elements, the concept we propose incorporates graphene sheets as parts of a resonator. Electrostatic tuning of the graphene surface conductivity leads to changes in the electromagnetic field boundary conditions at the resonator edges and surfaces, thus producing an effect similar to varying the electrical length of a resonator. The presented outline of the theoretical background serves to give phenomenological insight into the resonator behavior, but it can also be used to develop customized software tools for the design and optimization of graphene-based resonators and filters. Due to the linear dependence of the imaginary part of the graphene surface impedance on frequency, the proposed concept was expected to become effective for frequencies above 100 GHz, which is confirmed by the numerical simulations. A frequency range from 100 GHz up to 1100 GHz, where rectangular waveguides are used, is considered. Simple, all-graphene-based resonators are analyzed first, to assess the achievable tunability and to check the performance throughout the considered frequency range. Graphene-metal combined waveguide resonators are then proposed in order to preserve the excellent quality factors typical of the type of waveguide discontinuities used. The dependence of resonator properties on key design parameters is studied in detail, and their behavior across the frequency range of interest is examined using eight different waveguide sections appropriate for different frequency intervals. The proposed resonators are aimed at applications in the submillimeter-wave spectral region, serving as compact tunable components for the design of bandpass filters and other devices.
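For context, a widely used intraband (Drude-type) model of the tunable graphene surface conductivity, with the chemical potential \(\mu_c\) set electrostatically (an assumption for illustration; the paper's exact model is not reproduced here), is

\[ \sigma_s(\omega) \;=\; \frac{-j\, e^{2} k_B T}{\pi \hbar^{2} \left( \omega - j\tau^{-1} \right)} \left[ \frac{\mu_c}{k_B T} + 2 \ln\!\left( e^{-\mu_c / k_B T} + 1 \right) \right], \]

using the \(e^{j\omega t}\) convention, with \(\tau\) the relaxation time. Its inverse, the surface impedance \(Z_s = 1/\sigma_s\), then has an imaginary part that grows linearly with \(\omega\), consistent with the frequency behaviour noted above.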
Abstract:
The selected publications focus on the relations between users, eGames and the educational context, and on how they interact together so that both learning and user performance are improved through feedback provision. A key part of this analysis is the identification of behavioural, anthropological patterns, so that users can be clustered based on their actions and the steps taken in the system (e.g. a social network, online community, or virtual campus). In doing so, we can analyse large data sets of information produced by a broad user sample, which will provide more accurate statistical reports and readings. Furthermore, this research focuses on how users can be clustered based on individual and group behaviour, so that personalized support through feedback is provided and the personal learning process is improved along with the group interaction. We take inputs from every person and from the group they belong to, cluster the contributions, find behavioural patterns and provide personalized feedback to the individual and the group, based on personal and group findings. And we do all this in the context of educational games integrated in learning communities and learning management systems. To carry out this research we designed a set of research questions spanning the 10 years of published work presented in this thesis. We ask whether users can be clustered together based on the inputs provided by them and their groups; whether and how these data are useful for improving learner performance and group interaction; whether and how feedback becomes a useful tool for such a pedagogical goal; whether and how eGames become a powerful context in which to deploy the pedagogical methodology and the various research methods and activities that make use of that feedback to encourage learning and interaction; and whether and how a game design and a learning design must be defined and implemented to achieve these objectives and to facilitate the productive authoring and integration of eGames in pedagogical contexts and frameworks. We conclude that educational games are a resourceful tool for providing a user experience that leads to better personalized learning performance and enhanced group interaction along the way. To do so, eGames, while integrated in an educational context, must follow a specific set of user and technical requirements, so that the playful context supports the pedagogical model underneath. We also conclude that, while playing, users can be clustered based on their personal behaviour and interaction with others, thanks to pattern identification. Based on this information, a set of recommendations is provided to the user and the group in the form of personalized feedback, timed for optimum impact on learning performance and group interaction. In this research, Digital Anthropology is introduced as a concept at a late stage to provide a backbone across various academic fields including social science, cognitive science, behavioural science, educational games and, of course, technology-enhanced learning. Although only recently described as an evolution of traditional anthropology, this approach to digital behaviour and social structure facilitates understanding across fields and a comprehensive view towards a combined approach.
This research takes forward the existing published work on users and eGames for learning, and turns the focus to the next step: clustering users based on their behaviour and offering proper, personalized feedback to the user based on that clustering, rather than just on isolated inputs from every user. Indeed, this pattern recognition in the described context of eGames in educational settings, aimed at personalized counselling of the user and the group through feedback, is something that has not been accomplished before.
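A minimal sketch of the behaviour-based clustering and feedback mapping described above; the features, cluster count and feedback templates are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each user is a vector of interaction features logged by the eGame or
# learning platform; cluster membership drives which feedback they receive.

# rows: users; columns (assumed): sessions, avg. score, posts, help requests
X = np.array([[12, 0.81, 30, 2],
              [ 3, 0.42,  1, 9],
              [11, 0.78, 25, 3],
              [ 2, 0.35,  0, 11]], dtype=float)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

FEEDBACK = {0: "challenge-oriented feedback", 1: "scaffolding and peer support"}
for user, lab in enumerate(labels):
    print(f"user {user}: cluster {lab} -> {FEEDBACK[lab]}")
```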
Abstract:
In the crisis-prone and complex contemporary business environment, modern organisations and their supply chains, large and small, are challenged by crises more than ever. Knowledge management has been acknowledged as an important discipline able to support the management of complexity in times of crisis. However, the role of effective knowledge retrieval and sharing in the process of crisis prevention, management and survival has been relatively underexplored. In this paper, it is argued that organisational crises create additional challenges for knowledge management, mainly because complex, polymorphic, and both structured and unstructured knowledge must be efficiently harnessed, processed and disseminated to the appropriate internal and external supply chain actors under specific time constraints. In this perspective, a process-based approach is proposed to address the knowledge management needs of organisations during a crisis and to help management establish the necessary risk-avoidance and recovery mechanisms. Finally, the proposed methodological approach is applied in a knowledge-intensive Greek small and medium enterprise from the pharmaceutical industry, producing empirical results, insights into knowledge pathologies during crises, and relevant evaluations.
Abstract:
This article considers the opportunities of civilians to peacefully resist violent conflicts or civil wars. The argument developed here is based on field research on the peace community of San José de Apartadó in Colombia. The analytical and theoretical framework, which delimits the use of the term ‘resistance’ in this article, builds on the conceptual considerations of Hollander and Einwohner (2004) and on the theoretical concept of ‘rightful resistance’ developed by O’Brien (1996). Beginning with a conflict-analytical classification of the case study, we describe the long-term socio-historical processes and the organizational experiences of the civilian population which favoured the emergence of this resistance initiative. The analytical approach to the dimensions and aims of the resistance of this peace community leads to a differentiation of O’Brien’s concept of ‘rightful resistance’.