We all know that the sacred is present in our lives, but sometimes we cannot understand or perceive it. This paper represents an endeavor to synthesize knowledge from the fields of art, literature, art criticism and the history of religions, starting with the conception of the sacred from the book The Visual Representation of the Sacred, written by Adrian Stoleriu. The paper has three main sections: the Sacred, the Sacred in Literature and the Sacred in Contemporary Works of Art.
Firstly, the paper defines the word “sacred” independently of literature or the arts. As the next lines show, however, the word has a very broad meaning. The author offers wide-ranging perspectives on the Romanian, English and French definitions of the term, also describing the perspective of Rudolf Otto, a German theologian and philosopher, and that of Mircea Eliade, a Romanian historian of religions. The author focuses on the Romanian-French-English perspective of the concept and reveals the origins of the word “sacred”, linking it to its meanings.
It is stated that one of the most complex definitions was given by Mircea Eliade. He held that hierophany is the main way of knowing the sacred, and the essence of his vision is represented by homo religiosus. The next lines link this to Adrian Stoleriu’s work, which states that the individual stands at the intersection of the sphere of the sacred with that of the profane. Later on, we find out the difference between the sacred and the profane according to Durkheim.
Secondly, the paper refers to the manifestation of the sacred in literature. The works discussed are Lucrarea – 2004 (The Work) and Se întorc morții acasă – 2014 (The Dead are Coming Back Home), written by Cornel Constantin Ciomâzgă. The first book is presented as a novel containing fragments that underline the steps of spiritual transformation one should take in order to get closer to God. In the second one, the literary dimension of the sacred is represented by the words related to the beauty of love.
And, finally, the paper links the sacred to contemporary works of art. We know that the sacred has had a great influence on art, but here the focus falls on certain works, for example on an article that appeared in Anastasis. Research in Medieval Culture and Art, called “Sacred Symbols in Dimitrie Gavrilean’s Paintings”. Dimitrie Gavrilean was a contemporary painter who focused on Romanian fundamental myths, ancestral myths and recently Christianized myths. His work reflects his deep attachment to the principles of the Christian iconography of Byzantine tradition. One of his works that can be seen as an example of sacred art marked by the presence of angels is “Cel Vechi de zile” (The Ageless One).
In conclusion, this article outlines the differences between the existing definitions of the sacred, but also states the distinction between the sacred and the profane, and these points are supported by describing and analyzing works from literature and art.
Read more here: http://www.edusoft.ro/brain/index.php/libri/article/view/642
The paper written by Răzvan Bogdan from the Department of Computers and Information Technology, Politehnica University of Timisoara, presents a way of integrating Embedded Systems Massive Open Online Courses (MOOCs) into blended courses. Moreover, it also provides an evaluation of this approach through the sentiment analysis technique.
Starting with an explanation of MOOCs, the author focuses on one type of course which is still underrepresented in the field of blended courses – that of embedded systems. Consequently, the aim of this paper is to understand, with the help of sentiment analysis, the way in which students react to blending embedded systems MOOCs into embedded systems courses. The blended variant is applied to the Embedded Systems course at the “Politehnica” University of Timisoara in Romania, in the third year of study.
As we read on, we discover relevant previous work: for example, MOOC evaluation has been treated in the literature from different angles – temporal, economical and scientific points of view – or through evaluation systems based on facial expression. Sentiment analysis is described as useful for business improvement, but the author also notes that Schouten & Frasincar present an algorithm which deals with aspect-level sentiment analysis, meaning that the sentiment is aggregated on the different entities present within the analyzed text.
However, the integration of embedded systems Massive Open Online Courses (MOOCs) into a traditional course may be done in different ways. On the other hand, offering a sentiment analysis of the impact that this integration has upon students is even more important than the integration itself. The paper presents the approach used at the “Politehnica” University of Timisoara, which consists of dividing the students into two groups: the first one following the traditional approach to teaching the course and the second one using a platform on which messages, assisted activities, homework, etc. are posted. The goal of integrating MOOCs into traditional Embedded Systems courses is to broaden students’ practical perception of the intricacies of embedded systems. Moreover, it aims to allow students to become aware of MOOC technologies.
The methodology comprises two major tracks: activities pertaining to the integration of the MOOCs into the blended course, and the specific methods used to obtain the results of the sentiment analysis, each of them having specific steps to be followed.
As a result, after applying on-site and distance-learning types of integration of MOOCs into blended courses, in which 72 students were enrolled, the author states that 57 students (79.16%) chose to complete the survey. The steps described in the previous section regarding the research methods were applied in order to perform the sentiment analysis on the corpus collected during the students’ survey. The results show that the polarity of the corpus is positive for all three tools: Natural Language Toolkit (NLTK), Semantria and Vivekn. To aid understanding, the author also provides three figures which illustrate the Semantria results, the Twitter sentiment analysis results and the sentiment analysis on extracted themes.
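To give a rough idea of what a positive corpus polarity means, here is a minimal lexicon-based sketch in Python. It is an invented toy example with made-up word lists and answers, not the NLTK, Semantria or Vivekn pipelines actually used in the paper:

```python
# Toy lexicon-based polarity scorer -- illustrative only, not the paper's tools.
POSITIVE = {"good", "great", "useful", "helpful", "interesting", "positive"}
NEGATIVE = {"bad", "boring", "difficult", "confusing", "negative", "hard"}

def polarity(text: str) -> int:
    """Score one survey answer: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def corpus_polarity(answers):
    """Aggregate polarity over the whole corpus; > 0 means overall positive."""
    return sum(polarity(a) for a in answers)

# Invented sample answers standing in for the students' survey corpus.
answers = [
    "the MOOC material was useful and interesting",
    "homework was difficult at times",
    "overall a great blended course",
]
print(corpus_polarity(answers))  # → 2
```

Real tools such as NLTK work with weighted lexicons, negation handling and per-aspect aggregation, but the principle of summing word-level sentiment into a corpus-level polarity is the same.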
Finally, we can state that this paper presents a modality in which specific Massive Open Online Courses (MOOCs) can be integrated into a blended Embedded Systems course in a non-synchronous way. The results are very good and encouraging, because the students found the integration positive.
Read more here:
Daniela Dănciulescu (University of Craiova), Mihaela Colhon (University of Craiova) and Gheorghe Grigoraș (Alexandru Ioan Cuza University of Iași) worked on a study which extends the method presented in a work of Tudor (Preda) (2010), namely the method for formal language generation based on labeled stratified graph representations. The authors consider the stratified graph formalism in a system of knowledge representation and reasoning. The paper offers a method that can be applied to generate any Right Linear Language construction.
The paper consists of four sections. The first one is introductory, and the second section provides the theoretical background of the presented study. The manner in which the language generation mechanism is designed by means of a system of knowledge representation and reasoning is presented in section three, and the final section includes concluding ideas and the researchers’ future work.
The mechanism provided by the authors may generate languages of the first type and of the second one. This means that the new system for formal language generation by means of a knowledge system is based on stratified graphs. A major part of the paper consists of exemplifying it with the help of their latest work (Dănciulescu, 2015). Still, not everything is perfect, as there are still problems without a solution. For example, one of the problems is the investigation of the manner in which the generated formal language sequences could be affected. Another is establishing the families of formal languages that can be generated using this type of knowledge system: regular languages, context-sensitive languages, etc. The researchers plan to work on these problems and organize studies in order to solve them.
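For readers unfamiliar with the terminology, generating a right linear language can be sketched very simply without the stratified graph machinery. The following Python fragment is an illustrative assumption-free textbook construction (the grammar for aⁿb is invented for the example), not the authors' mechanism:

```python
# Generating words of a right linear language by breadth-first expansion.
# Right-linear rules have the form A -> aB or A -> a; here we encode
# S -> aS | b, whose language is a^n b (n >= 0). Illustrative sketch only.
from collections import deque

RULES = {"S": [("a", "S"), ("b", None)]}  # (terminal, next nonterminal or None)

def generate(start="S", max_len=4):
    """Enumerate all words of the grammar up to length max_len."""
    words, queue = set(), deque([("", start)])
    while queue:
        prefix, nonterminal = queue.popleft()
        for terminal, nxt in RULES[nonterminal]:
            word = prefix + terminal
            if len(word) > max_len:
                continue                  # prune words that grew too long
            if nxt is None:
                words.add(word)           # terminating rule: word is complete
            else:
                queue.append((word, nxt)) # keep expanding to the right
    return sorted(words)

print(generate())  # → ['aaab', 'aab', 'ab', 'b']
```

The stratified-graph approach of the paper replaces this grammar-rule table with labeled graph structures in a knowledge representation system, but the generated language family (right linear, i.e. regular) is the same.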
Read more here:
BRAIN: Solving Optimization Problem via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm
The article written by Ahmet Demir from Harahalli Vocational School, Usak University, Ușak, Turkey and Utku Kose from the Computer Sciences Application and Research Center, Usak University, Ușak, Turkey is a study of the importance of optimization and of solving optimization problems. They propose intelligent optimization techniques based on Artificial Intelligence for use on optimization problems. In order to provide a comparative study of classical optimization solutions and Artificial Intelligence solutions, two optimization algorithms are proposed: the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA).
Optimization is defined as choosing the best set of alternatives for a given objective while taking certain rules into consideration. It is important not just in science, but in real life, too. Many fields of our life now function according to applications which use optimization. But things do not stop here: optimization also brings certain problems with it, as it is constantly changing. Consequently, various advanced optimization solutions have been introduced over time.
The Vortex Optimization Algorithm is an intelligent optimization technique inspired by vortex flows in nature. VOA simulates some dynamics that occur in the context of a natural vortex. It follows several steps, at the end of which the best values obtained within the loop are taken as the optimum solutions.
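The general idea of a vortex-style search can be sketched as sampling around the current best point with a radius that contracts each iteration. The code below is a simplified illustration of that idea under our own assumptions (Gaussian sampling, a fixed 0.9 contraction factor, an invented test function), not the authors' exact VOA formulation:

```python
# Simplified vortex-style search: keep the best point found so far as the
# vortex center and shrink the sampling radius every iteration.
# Illustrative sketch only -- not the paper's VOA equations.
import random

def vortex_search(f, lo, hi, iters=60, samples=20, seed=1):
    random.seed(seed)
    center = (lo + hi) / 2            # initial vortex center
    radius = (hi - lo) / 2            # initial vortex radius
    best_x, best_f = center, f(center)
    for _ in range(iters):
        for _ in range(samples):
            x = min(hi, max(lo, random.gauss(center, radius)))
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        center = best_x               # re-center the vortex on the best point
        radius *= 0.9                 # contract the vortex
    return best_x, best_f

# Minimize f(x) = (x - 2)^2 on [-10, 10]; the true optimum is x = 2.
x, fx = vortex_search(lambda x: (x - 2) ** 2, -10, 10)
print(round(x, 3), round(fx, 6))
```

As the radius contracts, exploration narrows around the best candidate, which is how the loop converges toward the optimum solution described above.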
The Cognitive Development Optimization Algorithm is also an intelligent optimization technique, one which includes simple algorithmic steps and equations. These shape a solution frame inspired by Piaget’s Theory of Cognitive Development. The Initialization, Socialization, Maturation, Rationalizing and Balancing Phases are the phases on which CoDOA is grounded.
In order to help readers better understand the way in which optimization problems can be solved via VOA and CoDOA, the researchers solved problems from Thomas’ Calculus, 11th Edition. For a more precise assessment, the results obtained using the algorithms were compared with the solutions provided in the solution manual of the Thomas’ Calculus book.
In conclusion, after analyzing the problems, the researchers found that the obtained results demonstrate that Artificial Intelligence techniques play an effective role in providing the desired optimization results. They save time in solving complex problems, thanks to the power of computers and to mathematically and logically improved solution steps. Moreover, the authors plan to use the techniques for solving more advanced optimization problems in mathematics and to compare the results with other strong solution methods.
Read more here:
BRAIN: The Role of the Conceptual Invariants Regarding the Prevention of the Software Artefacts’ Obsolescence
Răzvan Bocu and Dorin Bocu from the Department of Mathematics and Computer Science, Transilvania University of Brașov, Romania, wrote a paper on the role of conceptual invariants in preventing the obsolescence of artefacts, with an emphasis on software engineering. The aim is to understand the invariants’ role in the continuous qualitative progress of human artefacts, making a connection with software systems engineering. Addressing the artefacts’ obsolescence is possible only through changes. Changes generate discomfort for other artefacts, and the amplification of this discomfort beyond a certain tolerance may be called obsolescence.
There are works which underline the importance of the concept of structure during the modeling process of systemic artefacts, and the importance of the interface for the capability of an artefact to cooperate. The structure (S) and the interface (I) are invariants, and they are characterized by a certain stability (ST). Using these terms, the researchers explain the role that the stability of these invariants plays in the obsolescence of software systems.
Consequently, they analyze the structure, the behaviour and the change of an artefact. Firstly, the authors discuss the necessity for an artefact to restructure, which has two components: the internal dynamics of the artefact and the dynamics of the artefact’s relations with the environment inside which it operates. The paper includes Figure 2, a representation of the evolutive model of an artefact in agreement with the exigencies of its operating environment. Better granularity, increased adaptability and an added degree of scalability are just some of the essential benefits of an artefact’s evolution. According to this figure, it can be stated that the obsolescence of software artefacts is unavoidable. However, people have the alternative of addressing obsolescence through appropriate methods; changes represent such a method.
So, changes regarding the evolution of the artefacts’ structure and behaviour appear, and they are of four essential types. The researchers explain the changes as being both a necessity and a component of a causal chain and, in order to help the reader understand them better, they raise some questions. Is the artefact affected by chaos? Would the artefact benefit from the re-specification of its objectives? Does the artefact require additional efficiency? Does the artefact have problems with its active interaction with the environmental context?
According to these questions, the adaptation of the artefacts to changes may be divided into technological changes, requirements changes, new modelling paradigms, screening for hidden errors and user assessment over the medium and long term. Consequently, the authors note that change may be seen as a modality through which an artefact adapts to new requirements in its operating environment. Therefore, artefacts survive only if change exists. From the perspective of survival, change is the expression of an artefact’s behaviour. Two types of changes are presented. Both of them define the artefact’s behaviour, but the difference is that the first type is founded on a given structural invariance, while the second one is associated with a certain reorganization process.
In conclusion, Răzvan Bocu and Dorin Bocu admit that the obsolescence of the software artefacts is unavoidable, but, in order to have the guarantee of a good longevity of the artefacts, it is useful to found the modelling of the software artefacts on conceptual invariants, together with clear and pragmatic usage principles for them.
Read more here:
Nowadays, the IT industry is in a human resources crisis. Students tend to use their laptops and phones more often than in past decades, and tutors are increasingly loaded with teaching, research and administrative tasks. Consequently, universities should take into account the use of technologies like LMSs (Learning Management Systems), MOOCs (Massive Open Online Courses), GLOs (Generative Learning Objects) or AGLOs (Auto-Generative Learning Objects). This paper, written by Ciprian-Bogdan Chirila from the Politehnica University of Timisoara, focuses on computer science disciplines (data structures and algorithms) and shows the way in which a tutor can build several auto-generative learning objects in order to assess the knowledge of a class of students.
Section 2 of the paper presents related works in the area of learning objects. We can find works which present a generative model for teaching computer science disciplines using Lego robots, principles for designing e-learning tools dedicated to the local automotive industry, a model similar to the AGLO approach controlled by parameters but enhanced with dynamic learning and evaluation functionalities, and so on. We discover that there are many original models which can be used in the area of learning objects.
In the third section we reach the presentation of the structure of AGLOs in the context of the approach. A figure is given in which we can observe the AGLO meta-model and its definition (which contains sections like name, scenario, theory, questions, etc.). Each line is briefly described and analyzed.
Moving forward to section 4, we learn about the specification, design and implementation of learning objects to be used in automatic online assessment. The paper focuses on a set of 10 tests from the area of trees and graphs used in the laboratory evaluation of the students, and it shows the way in which the tests can be implemented with AGLOs. Each test is different and consists of different activities, but the paper gives a brief description of every single one.
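The core idea of an auto-generative learning object – a parameterized template that produces a fresh instance of the same question for each student – can be sketched as follows. The field names loosely mirror the meta-model sections mentioned above (scenario, question), but the content and structure here are our own invented toy, not the paper's actual test battery:

```python
# Toy auto-generative learning object: a tree question whose parameters are
# randomized per student, with the model answer computed from them.
# Illustrative sketch only -- not the paper's AGLO implementation.
import random

def generate_tree_question(seed):
    """Generate one parameterized assessment item about complete binary trees."""
    rng = random.Random(seed)       # seeding makes each instance reproducible
    levels = rng.randint(2, 5)      # randomized parameter of the template
    nodes = 2 ** levels - 1         # model answer derived from the parameter
    return {
        "scenario": f"A complete binary tree has {levels} levels.",
        "question": "How many nodes does it contain?",
        "answer": nodes,
    }

# Each student (seed) receives a different instance of the same test.
for seed in range(3):
    item = generate_tree_question(seed)
    print(item["scenario"], item["question"], "->", item["answer"])
```

Because the answer is computed from the same parameters that fill the scenario text, every generated instance can be auto-graded, which is what makes such tests reusable for automatic online assessment.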
The fifth section consists of a discussion on complexity, based on the number of symbols used in the design of the AGLO test battery. There are three aspects to be analyzed: symbol counts, symbol percentages and parameter counts for the creation of structures. Consequently, three tables are given: the first one shows the number of symbols used in the design of the 10 tests, the second one shows the percentages of the three categories of symbols and the third one counts the parameters.
In conclusion, AGLO models have dynamic content, and this is why students benefit from them. AGLO tests are reusable, and students may individually test their understanding of algorithms. Consequently, we may say that AGLO models are good options for both students and tutors, as tutors can also use AGLOs to structure the content and to modify and adapt it at two levels. For the future, the researchers plan to multiply the first category of variables and to introduce levels of difficulty and adaptiveness.