This article considers the phenomena of science that gave rise to the problem, which the analytics (the logical positivists) tried unsuccessfully to overcome and which the post-positivists used to substantiate their conclusions about the relativity of scientific cognition. The general method of substantiation, which rational science uses to substantiate its theories, is expounded. It is shown that this method remains unchanged through changes of scientific paradigms, substantiating layers, and so on. Possession of this method separates science from non-science and pseudoscience and provides its special epistemological status. On the basis of this method, a rational explanation of the above-mentioned phenomena of science is given, and the argumentation of the post-positivists is refuted.
Does a difference between science and non-science exist in principle, and if it does, what is it? Does science possess an unchangeable method of substantiating its theories and conclusions, and if it does, what is it? To what degree can we rely on scientific knowledge? What is the origin of our concepts: are they connected with experience, or are they epistemological substances no more connected with experience than the gods of Homer? If they are connected, what kind of connection is it, taking into account that science, as a rule, changes its concepts, conclusions, and their substantiation when passing from one scientific paradigm to another (space and time are absolute in Newton's mechanics and relative in Einstein's, and so on)?
Questions of this kind have stood at the center of philosophical attention for as long as science has existed, and discussion of the subject does not quiet down but grows in proportion to the influence of science on the life of mankind. It is possible to divide all the philosophers dealing with this subject into two categories: absolutists and relativists. Naturally, various philosophers are absolutists or relativists about science to different degrees, in different aspects, and with different arguments.
Schematically, the position of the absolutists can be expressed as follows: science absolutely reflects reality and changes neither its ideas (concepts) and beliefs nor the substantiation of those beliefs, and that is what separates science from non-science and gives science its special epistemological status. More exactly: science up to today has been changing its concepts, beliefs, and their substantiation, but once it adopts the method of this or that philosopher, it will change neither concepts, nor conclusions, nor substantiation. Many methods were offered, but basically they all attempted to find an absolute beginning of cognition and knowledge. Descartes, Kant, Fichte, Husserl, and others tried to find an absolutely reliable percept. Each of them had his own recipe for converting the subjective percepts of people into absolutely reliable ones. (For example, Kant proposed to do this by means of the transcendental ego, Husserl by means of the procedures of eidetic reduction and the epoché, and so on.) They believed that it would be possible to found all of science on this absolutely reliable percept, and that afterward science would change neither its concepts, nor its beliefs, nor their substantiation. But not one of them tried to build even a single concrete scientific theory on such absolutely reliable percepts, because not one of them succeeded in the prior task: substantiating the reliability of their absolute percept. And today this direction, let us say, is out of fashion.
Another kind of adherent of an absolute beginning of knowledge tried to deduce all of science from some basic theory whose truth (the truth of its basic postulates) is well apparent. Peano tried to deduce all of mathematics from axiomatically rebuilt arithmetic. Russell and Hilbert attempted the same, but from some absolutely trivial, self-evident axioms. Frege, Russell, and other analytics (also called logical positivists) tried to deduce all of mathematics from logic. The same philosophers (including Carnap), starting from the fact that the rules of logic are formulated by means of the words of ordinary language, which as a rule are not univocal (their meaning depends on context), developed semantics and mathematical logic to guarantee the univocal meaning of words. As in the previous case, not one of them even began to derive the other sciences from mathematics. Moreover, one of the most devoted supporters of an absolute beginning of science in the form of a system of trivial axioms, Russell, was compelled to admit defeat on this path (1).
As to the relativists of science, I shall confine myself to their last wave, which created a relativistic attitude toward science not only among philosophers but also in a major part of society. Representatives of this wave divide into two categories: the social post-positivists (Quine, Kuhn, Feyerabend) and the cognitive post-positivists (Popper, Lakatos, Laudan).
The social post-positivists hold the most orthodox relativistic position. For example, Feyerabend claimed:
There is no scientific method, no single procedure or set of rules that underlies every study and guarantees that it is scientific and therefore deserving of trust. (2)
The cognitive post-positivists are not so orthodox, and some of them, Popper for example, declare themselves defenders of the special epistemological status of science and claim to give criteria dividing science from non-science. But the gist of the matter is not declarations and self-descriptions but argumentation, and the argumentation of Popper and his pupil Lakatos contradicts their declarations and converts them into relativists of science. For example, Popper asserts that science differs from non-science in that scientific hypotheses must be falsifiable (3). This is obviously right, and hypotheses like "the sea is storming because Neptune is angry," which are not testable in principle, are of course not scientific. But this by no means saves the special epistemological status of science, because a countless number of hypotheses that are completely falsifiable yet do not even pretend to be true may be proposed for any subject of study. Constructing such hypotheses and then falsifying them does not lead to truth, and therefore Popper's criterion does not separate science from non-science.
Popper also claims that although science does not give truth (in principle) and does not give substantiation (reliable and unchangeable) of its theories, it nevertheless differs from non-science, because it makes a choice between theories (hypotheses) founded on the criterion of their closeness to truth:
When I speak of the preferability of a theory, I mean that this theory is closer to truth, and we have reason to believe or to suppose that. (4)
But what is this Popperian reason? His pupil Lakatos writes:
Popper's critical fallibilism takes the endless regress in substantiation and in definition with all seriousness; it has no illusions about stopping those regresses... In this approach a grounding of knowledge is absent, whether above or below a theory... We never know, we only guess. Nevertheless we can turn our surmises into an object of criticism, criticize them, and improve them.
A stubborn skeptic could nevertheless ask once more: "How do you know that you are improving your surmises?" But now the answer is simple: "I guess." There is nothing bad in an endless regress of guessing. (5)
So all the grounding of the preferability of one theory over another turns out, after all, to be nothing more nor less than the same guessing. Why that is bad, I think, requires no explanation. Thus the fallibilists (the cognitive post-positivists), who considerably assisted the relativists of science (due to taking the endless regress in substantiation and in definition with all seriousness), by no means defended its special epistemological status; that task is still outstanding.
On the whole, the position of the post-positivists may be summarized by the following assertions:
1) The inevitability of an endless process of grounding in science. That is one of Lakatos's main points.
2) The concepts used by science for the description of reality are not connected with experience (are not reducible to experience). They are connected only with the theory and with more fundamental theories, and reduce only to one another in an infinite regression. That is the main point of Quine's ontological relativism, and it is adopted by all the post-positivists, including the cognitive ones, in particular Popper and Lakatos. Taking into account that it is the central point of the whole post-positivist conception, I shall illustrate it with a quotation from Quine:
As indicated, the affirmations of the relativists are based on certain phenomena of science. I have partly mentioned them before. Now I want to recite all of them and explain how the assertions of the relativists are connected with them. Here they are:
1. The polysemy of the words of ordinary language, with the help of which science expresses and formulates its beliefs and their substantiation. This fact is used by Quine as one of the arguments supporting his ontological relativism. Kuhn uses it to ground the impossibility of comparison between scientific theories and the absence of a common language between scientists. And it is the fulcrum for Sapir and Whorf in their linguistic relativism. As an illustration I shall quote Kuhn:
Adherents of different theories are like people with different mother tongues. Communication between them proceeds by means of translation, and all the well-known difficulties appear in it. This analogy, of course, is not perfect, because the vocabularies of the two languages may be in major part equivalent (the qualification "in major part" is mine). But some words in their basic, and also in their theoretical, vocabulary — words like "star" and "planet," "fusion" and "composition," "force" and "substance" — function differently. These differences are unexpected, and they will be discovered and localized only through repeated experience of broken communication. Without further discussion of this matter I simply affirm the existence of a limit to which adherents of different theories can communicate with each other. This limit makes it hard, or rather impossible, for one scientist to hold both theories in the sphere of his thinking and to compare them point by point with each other and with nature. (7)
2. The basic postulates, the axioms of any theory, accepted without proof, sooner or later show themselves to be conclusions deduced from the postulates of a more fundamental theory: the differential calculus from the theory of limits, the theory of limits from set theory, the classical theory of gases from the kinetic theory, and so on. This phenomenon, named by Lakatos the changing of substantiating layers, is taken by all the post-positivists as sufficient ground to conclude that science does not have a general and unchangeable method of substantiating its theories. (This phenomenon, as indicated, also drove the analytics crazy, and they tried to eliminate it, but without success.)
3. In passing from one fundamental theory to another — theories named paradigms, following Kuhn, and describing intersecting fields of reality (like, for instance, classical mechanics, the theory of relativity, quantum mechanics, and the quantum-relativistic theory) — we usually change the content of the basic concepts. This phenomenon is named by various authors a change of ontology or of the ontological sense of concepts (Quine), a change of the values of variables (the same), a countless regress of values (senses) (Lakatos), and so on. The classic example of this phenomenon is space and time, which are absolute in the classical mechanics of Newton and relative in Einstein's theory. Another example is the electron, which at the beginning was conceived as a charged ball, after that as the same ball but with mass, later as a charged cloud dispersed over an orbit circling the nucleus of the atom, and finally as a packet of waves. All the post-positivists without exception use this phenomenon to substantiate the above-mentioned central point of ontological relativism about the absence of connection between the concepts of science and experience. The logic here is this: if concepts were connected with experience, they could not change their content when passing from paradigm to paradigm.
4. In the same change of fundamental theories, not only the basic concepts change but also the conclusions. Thus in classical mechanics velocities are summed according to the rule of Galileo, and in the theory of relativity according to the formula of Lorentz. The speed of light, according to Newton, depends on the speed of the source of light; according to Einstein, it does not. And so on. This phenomenon is used by the fallibilists, and first of all Popper, to substantiate the assertion that any theory is fallible in principle. The logic here is this: a change of fundamental theories happens when the previous theory meets its refuting experiment (the Michelson experiment for the mechanics of Newton). A new theory, changing the axioms and basic concepts (the substantiating layer) of the previous one, makes itself conform with all previous experience plus the refuting experiment of the previous theory. But sooner or later it too will meet its own refuting experiment.
5. The fact that a new fundamental theory corresponds to the set of facts described by the previous theory (plus those not described by it) leads us to the next phenomenon: a concrete set of facts in a field that some theory pretends to describe — potentially countless (due to the potential countlessness of possible experiments giving new facts), but actually always limited — can be covered by the conclusions of different theories based on different sets of axioms with different concepts.
From this, Quine, Lakatos, and others draw the conclusion that the concepts introduced by science are irreducible posits epistemologically comparable to the gods of Homer, and that the axioms are convenient constructs of cognition, aids for forecasting future experience on the basis of past experience. That means we create convenient constructions giving a logical explanation of the things observed at the moment — constructions which have the same relation to truth and to reality as the assertion "the sea is storming because Neptune is angry," and which are substantiated neither more nor less than the Greek myth. And when such a model meets its refuting experiment, we build the next one, having the same relation to reality but covering a bigger set of facts. (Notably, some representatives of the natural sciences also share this position, and in real science we can meet theories built in accordance with this scheme.) Having parted science completely from reality, it was not difficult for the social post-positivists to come to the belief that science depends on the social factor. (According to Kuhn, Einstein made space and time relative due to reading Marx.)
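The contrast in point 4 above — Galilean summation of velocities versus the Lorentz formula — can be made concrete with a small numeric sketch (my own illustration, not part of the article's argument):

```python
# Illustrative sketch (mine, not from the article) of the two
# composition rules for collinear velocities contrasted in the text.

C = 299_792_458.0  # speed of light in vacuum, m/s

def add_galileo(u: float, v: float) -> float:
    """Classical (Galilean) rule: velocities simply sum."""
    return u + v

def add_einstein(u: float, v: float) -> float:
    """Relativistic (Lorentz/Einstein) rule; the result never
    exceeds the speed of light."""
    return (u + v) / (1.0 + u * v / C**2)

# Two bodies each moving at half the speed of light in the same
# direction: classically they compose to c, relativistically to 0.8 c.
u = v = 0.5 * C
assert add_galileo(u, v) == C
assert abs(add_einstein(u, v) - 0.8 * C) < 1.0  # within 1 m/s
```

The two formulas agree for everyday speeds (where u·v/c² is negligible) and diverge only near the speed of light, which is exactly the sense in which the new theory covers the old set of facts plus the refuting experiment.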
Now I will explain my point of view on the problem at issue. I have named my position the new rationalism, opposing it both to the classical rationalism of the above-mentioned absolutists of science and, mainly, to the wave of relativists of science that dominates to this day. My new rationalism differs from classical rationalism in that I recognize all the above-mentioned phenomena of the development of science and do not try to repair or eliminate them. But my position is nevertheless rationalistic, because I do not draw from these phenomena the conclusions that the post-positivists do. I show that rational science possesses a general method of substantiating its theories, unchanged through all the changes of grounding layers, sets of axioms, concepts, and conclusions, and that precisely this supplies science with its special epistemological status. This general method, contrary to Kuhn's assertion, gives a common language to scientists representing different paradigms and permits them to reach grounded agreement on which theories to accept and which to reject. I also show that the concepts and axioms of a scientific theory introduced according to the general method of substantiation are connected with experience and, despite their endless regress (in the terms of Lakatos) — that is, the change of their content in passing from one fundamental theory to another — are by no means irreducible posits or merely convenient constructs. Finally, I show that a scientific theory is not fallible in principle (Popper's assertion): a theory substantiated according to the general method remains true even after its refuting experiment appears. The latter only shows the limits of the truth (workability) of the theory.
My point of view on the problem under consideration is based on my theory of cognition (8), from which follows the general method of substantiation (9, 10, 11). In real science this method exists as a stereotype of natural-scientific consciousness, just as the grammar of ordinary language exists in it before the grammar is described. The method appeared gradually in the process of the evolution of natural science and largely took shape in the classical mechanics of Newton and Lagrange. Since, as indicated, this method has not up to today been expressed explicitly and exists only as a stereotype of natural-scientific consciousness, even today its norms are from time to time violated in practice, which, as will be shown below, inevitably leads to contradictions and paradoxes later on. Just as any scientific theory is an idealization of the reality it describes, so this method of substantiation is an idealization of the real practice of substantiation in science, and therefore it is not realized in practice with absolute strictness.
The general method of substantiation consists of the following three points: 1) introducing (building) the concepts, while simultaneously formulating the fundamental postulates (axioms) concerning these concepts; 2) building the conclusions of the theory, proceeding from the postulates; 3) verification of the conclusions.
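As a hedged illustration of the three points (a toy example of my own, not one the author gives), the ideal-gas law can be cast in this scheme: a concept introduced together with its postulate, a conclusion deduced from the postulate, and a verification of the conclusion:

```python
# A toy instance (my illustration, not the author's) of the three
# points of the general method of substantiation.

R = 8.314  # molar gas constant, J/(mol*K)

# 1) The concept "ideal gas" is introduced together with its
#    fundamental postulate P * V = n * R * T.
def pressure(n: float, T: float, V: float) -> float:
    return n * R * T / V

# 2) A conclusion deduced from the postulate: at fixed n and T the
#    product P * V is invariant under changes of volume (Boyle's law).
def pv_product(n: float, T: float, V: float) -> float:
    return pressure(n, T, V) * V

# 3) Verification: the deduced invariant is checked, within an
#    admissible deviation, for different volumes.
inv_small = pv_product(n=1.0, T=300.0, V=0.010)
inv_large = pv_product(n=1.0, T=300.0, V=0.025)
assert abs(inv_small - inv_large) < 1e-9
```

The point of the sketch is only the division of labor: the postulate fixes the concept, the conclusion is deduced from the postulate alone, and only the conclusion is confronted with measurement.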
Let us begin with concepts. The relation between our concepts and the reality they describe is the cornerstone of my theory of cognition and the basis for answering all the questions posed at the beginning of the article. That is why I want to consider it here attentively. First of all, contrary to the ideas of the analytics and the post-positivists, who concentrated on words and tried either to achieve univocal meaning for them or to prove the impossibility of that and deduce from it every kind of relativity of knowledge, it is not words but concepts that are the fundamental elements of cognition. And concepts, although in science they are usually (but not necessarily) expressed by means of words, can, as I will show, be determined univocally. That is why the analytics tried in vain to achieve the univocality of words, and the conclusions of the relativists, based on the absence of this univocality, are wrong.
Why are concepts, not words, the fundamental elements of cognition? Because both historically and ontogenetically, concepts appeared before language. In individual development a child acquires concepts before he knows how to speak. A newborn baby has neither words nor concepts. When he opens his eyes, he does not yet distinguish the objects he sees, but accepts his surroundings as a brilliant colored mosaic. He does not even realize that he sees various colors; he only receives different perceptions from different colors and the same perceptions from the same colors. On the basis of similar and different visual perceptions, with the help of perceptions from other receptors (for example, tactile ones), accompanied by motor activity, he forms in his mind — first subconsciously, then consciously, but not yet on the linguistic level — the primordial concepts of objects and phenomena, which Piaget (12) named image-etalons. These are really concepts, because with their help (by comparison with them) the child identifies new objects. In this manner, without knowing words, he distinguishes an apple from a ball of the same dimensions and color. Therefore he already has the concept of an apple, and with its help he separates the set of apples from the multitude of other objects. By analogy it is easy to see that the higher animals also possess some concepts (in the form of image-etalons). From this it follows that historically, during evolution, the predecessors of human beings possessed concepts before language appeared. But even today one forms concepts (if one does not receive them from others but forms them oneself) at first not only without words but also subconsciously. Even a scientist at first subconsciously feels some common property in the objects or phenomena in his field of interest; after that he recognizes this property; and only after that does he find words to define it. Words are only a means of transferring information in general, and concepts in particular, from one person to others.
This means is very efficient and universal, but still not the only possible one, and not even always the most efficient: drafts, schemes, formulas, and algorithms are much more efficient, each in its own field, than words. But since language played such an exceptional role in the evolution of human beings and their cognition, and since the deluge of words in our time of mass media simply pours over the individual, a fetishization of language has occurred in philosophy — not only among the positivists and post-positivists, but also in such branches as hermeneutics, communicative philosophy, the theory of discourse, and so on. At the same time science, first of all natural science, owing to its origin, never fetishized language, but with a greater or lesser degree of awareness always operated with concepts, not with words. More than that: science, overcoming the non-univocality of words (with the help of which it up to today usually formulates the definitions of its concepts), comes to the univocality of concepts and achieves it practically, using the general method of substantiation (how it does so will become clear further on).
Let us consider the evolution of concepts in science, or more precisely in cognition, at both the scientific and the pre-scientific stage — because at the pre-scientific stage too, the evolution developed in such a way that concepts became more precise (univocal). That is to be expected if we take into account that the evolution of cognition is part of the general evolution of human beings. Cognition is a form of adaptation to the environment, and the richer the cognition (knowledge) of some community is, the better this community is adapted to its environment. But a human community accumulates knowledge by means of the transfer of information from one member to others. The less accurate the transfer is, the less useful it is. And if the determination of concepts is not clear, the one who receives information can understand it differently from the one who transfers it. This circumstance aims the evolution of cognition in the direction of growing strictness of concept definition.
This evolution went as follows. At the linguistic stage the word-labels "fire," "water," "tree," and so on were hung on convenient concepts already existing in the consciousness of people in the form of image-etalons. Over time, owing to communication, word-denominations acquired more strictness compared with image-etalons, but still not complete strictness: one person can consider a concrete plant a tree, another a bush, and so on. At the early scientific stage a further growth of the strictness of concepts occurred through adding an enumeration of the properties (qualities) of objects to the word-denominations. For instance: a deer is an animal, a mammal, herbivorous, and so on. But even that did not supply the complete univocality of concepts. The latter was achieved only with the adoption of the general method of substantiation, in which to determine a concept means to enumerate its properties, to introduce a measure for each of these properties, and to indicate exactly the quantity of each of them. For example, we determine an ideal liquid as an incompressible and absolutely fluid liquid. That means we attach to the concept "ideal liquid" the properties of compressibility and fluidity, give them measures, and fix the quantity of compressibility at 0 and of fluidity at infinity.
I call this method of univocal determination nominal definition. It is not the only possible univocal definition. For instance, the axiomatic method of definition is also univocal, and, by the way, the two follow from one another. For example, the phrase from the nominal definition "an ideal liquid is incompressible" is equivalent to the axiom "an ideal liquid does not change its volume under pressure." But the univocality of a nominal or axiomatic (or any other) definition does not yet mean the univocality of the connection between this definition and the multitude of objects or phenomena to which we relate the concept. More exactly, there are no objects in reality that exactly correspond to a nominal or other univocal definition. Suppose we define a straight line as a curve with constant curvature equal to 0. No concrete object of reality corresponds absolutely to such a definition. For example, rays of light are bent in the gravitational field of large masses, and since the gravitational field is unequal in the vicinity of any point of space, rays absolutely complying with the above definition are absent. To make the reference of a concept's definition non-empty, and to determine univocally the multitude of objects of reality to which we relate our concept, we must introduce admissible deviations of the objects of reality from the nominal definition on the measure of the property lying at the base of this definition. For example, according to this approach, all lines whose maximal curvature does not exceed some value match the definition of a straight line.
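A minimal sketch (the names and the tolerance value are my own assumptions, not the author's) shows how an admissible deviation makes the reference of a nominal definition non-empty:

```python
# Illustrative sketch: a nominal definition fixes the quantity of a
# property exactly; an admissible deviation on the same measure turns
# the definition into a non-empty class of real objects.

NOMINAL_CURVATURE = 0.0      # "a straight line is a curve of curvature 0"
ADMISSIBLE_DEVIATION = 1e-6  # tolerance on the measure of curvature, 1/m

def matches_straight_line(max_curvature: float,
                          tol: float = ADMISSIBLE_DEVIATION) -> bool:
    """A real curve falls under the concept if its maximal curvature
    deviates from the nominal quantity by no more than the tolerance."""
    return abs(max_curvature - NOMINAL_CURVATURE) <= tol

# Without the tolerance no physical ray of light would qualify; with
# it, slightly bent rays still fall under "straight line".
assert matches_straight_line(0.0)      # the ideal case
assert matches_straight_line(5e-7)     # within the admissible deviation
assert not matches_straight_line(1e-3) # too curved to count
```

The design point is that the tolerance attaches to the measure already introduced by the nominal definition, so the class of matching objects remains univocally determined.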
Just as the general method of substantiation as a whole is an idealization of real substantiating practice in rational science, so the method of concept definition described above is an idealization of the real practice of definition in science. In real science we do not always meet nominal or axiomatic definitions of concepts, but the imperative of science — the demand for univocality of definition to the degree possible — is obvious. Nor do we always see admissible deviations in scientific practice, but the reason for that is that the deviations of objects from the nominal definition in sciences like physics are usually negligible.
All this still does not exhaust the topic of the connection between concepts and reality, but we shall return to it further on.
The next point of the general method of substantiation is the building of conclusions. The univocality of concepts still does not guarantee the univocality of a scientific theory as a whole, and therefore it does not guarantee the existence of a common language between scientists representing different paradigms. Also necessary are the univocality of the conclusions and the keeping, the retaining, of the sense of the concepts while the theory is being built. The latter is not guaranteed a priori. For example, it is not difficult to notice that the meaning of the concept "freedom" in Marxism is not the same when it deals with freedom in capitalist and in socialist society, although Marx uses the same word-denomination "freedom" in both cases.
As to the univocality of conclusions, it is easy to see the difference in this respect between physics, which is the etalon of rational science, and, let us say, astrology, which also pretends to be a science but is not. All engineers using physics formulas to solve a concrete task will receive the same result. At the same time, astrological forecasts given by various astrologers for the same case will differ from each other. The univocality of conclusions and the retention of the sense of concepts are provided and guaranteed only by using the general method of substantiation.
According to the general method, the building of the theory — the deduction of conclusions from the initial postulates — must be axiomatic, since in an axiomatic theory the set of axioms determines all the conclusions of the theory, independently of the concrete way (succession) in which the conclusions are built. The axiomatic way of constructing a theory also keeps the initial sense of the concepts. For in an axiomatic theory the set of axioms not only completely determines the concepts; in such a theory we can also substitute some conclusions for axioms, and the rebuilt theory preserves all its conclusions unchanged. (As is known, we can replace the fifth axiom of Euclid's planimetry with the theorem about the sum of the angles of a triangle and retain all the other theorems. Likewise we can substitute for the second law of Newton the conclusion of his theory about the quantity of motion, with the same result. And so on.) But if conclusions can become axioms, and axioms determine the concepts, including their sense, then the sense of the concepts stays unchanged from the beginning of the theory to its end.
I must stress that no other method of constructing conclusions — for example, the genetic (or constructive) one — possesses this quality.
It is pertinent to note here, first, that not all scientific theories up to today have been built axiomatically, especially purely axiomatically; and secondly, that there are philosophers who affirm the impossibility in principle of axiomatically rebuilding any sufficiently rich scientific theory.
As concerns the first, I must stress that the general method of substantiation is used at the stage of substantiation. But this stage is preceded by the stage of the genesis of a theory, in which not only the genetic method of constructing conclusions but even pure intuition, fantasy, and the other things that blend science with art are permitted and useful. It is substantiation (the general method of substantiation) that separates science from art and from non-science generally. In practice, however, the stages of genesis and substantiation are not divided strictly. Remember also that the general method of substantiation is an idealization of the real practice of substantiation in science: purely axiomatic construction does not always take place even in the phase of substantiation, but in those cases a not formally axiomatic, yet still deductive, expansion of the theory takes place in rational science.
As for the impossibility in principle of axiomatically rebuilding a theory, I must say that such assertions are based either on an unclear distinction between the genetic and substantiating stages of theory building, or relate to formal-logical theories, which, in contrast to scientific theories (in the meaning used in this article — for example, natural-science theories), do not pretend to describe some field of reality, and whose conclusions relate to variable predicates, which may have a field of feasibility or may have none. That is why the general method of substantiation does not extend to such theories. But the problem of the absoluteness or relativity of science discussed in this article does not relate to them either.
As an example of works in which the impossibility of the axiomatic rebuilding of any theory is asserted, I can cite the book by Styopin (13). He treats concrete theories in which the separation of the genetic and the substantiating phases has not been accomplished, for example Euclid's geometry, as proof of the impossibility of a purely axiomatic construction of theory. But he forgets that Hilbert did axiomatize Euclid's geometry. All his other examples are likewise drawn from the genesis of theories, the subject to which his book is devoted, and therefore the combined use of the genetic and axiomatic methods in them is completely acceptable; but that does not prove what he wants to prove.
It is pertinent here to elucidate the difference between the genetic (constructive) and the axiomatic methods. The basic element of the axiomatic method is the concept. The basic element of the genetic method is the abstract object described by Styopin. A concept fixes only those properties of objects that are determined by the axioms. That is, when using a concept, an appeal to properties of the object that are not enumerated in its definition is forbidden; if we make such an appeal, we are using a different concept and obtain a different theory, with different conclusions, concerning a different field of reality. The introduction and consideration of ever new properties of the same abstract object is the essential point of the genetic method and is carried out in the course of the so-called thought experiment. This is a very valuable heuristic device at the genetic stage, but it destroys the deductiveness of the construction and the unambiguity of the conclusions and therefore must be eliminated at the stage of substantiation. Because the general method of substantiation has remained implicit in science up to now, this requirement is violated from time to time, which sooner or later leads to contradictions and paradoxes. This will be shown below with examples.
Introducing concepts according to the general method of substantiation and using the axiomatic (deductive) construction of conclusions provides the unambiguity of theories and therefore a common language for scientists (contrary to the assertion of the relativists). But the unambiguity of conclusions does not yet mean their truth, and from the unambiguity of concepts their correspondence to reality, their connection with experience, their reflection of an ontological substance still does not follow. It is possible to complete the theory "the sea is storming because Neptune is angry" by defining the concept "Neptune" unambiguously and by indicating the causes of his anger, and in this way to provide unambiguous conclusions for this theory. What relation all this would have to truth needs no explanation. Therefore the assertions of ontological relativism are not yet refuted.
In opposition to ontological relativism I claim that the concepts and axioms of a theory may be connected with experience and that this connection is a demand of the general method of substantiation. As with the definition of concepts according to the general method of substantiation and with the axiomatic construction of theory, this demand too is sometimes violated in the real practice of science, and this leads to the penetration into science of all kinds of Neptunes and phlogistons. My argumentation is as follows:
Primordial, pre-scientific and pre-linguistic concepts, the image-etalons, are connected with experience by virtue of their origin. They cannot be expressed through other concepts, because at this stage there is no language. They cannot be tied to any theory, since there are no theories yet. As was shown, they originate from sensory perception alone. But this does not guarantee such a connection at the scientific stage, especially if we take into account the above-mentioned phenomena of science on which the post-positivists build their assertions. Below I shall give a few examples from physics which illustrate my claim that, normally, the connection of concepts (and axioms) with experience does take place, and that in the cases when this norm is violated, when science betrays its method, mere convenient constructs of cognition, phlogistons, appear.
Let us consider first the wave theory of light. How did it appear historically? There were experiments with diffraction and interference. Let us restrict ourselves, for simplicity, to interference. Is it possible, in accordance with the general method of substantiation, to define light as a wave phenomenon on the basis of visible interference alone? No, it is not, because interference showed only that light possesses the property of bending around obstacles in its path, whereas by waves we understand not only the property of bending around obstacles but also the properties of periodicity, frequency, amplitude and phase, the property of summation with resonance, and so on. The fact that light, similarly to mechanical waves, bends around obstacles does not yet mean that it possesses all the other properties of mechanical waves. In particular, it is not (yet) obliged to possess periodicity, sinusoidality and so on. What does "not obliged" mean? It means that the properties of periodicity, sinusoidality and the others are still hypothetical, still in no way connected with experience; according to the general method of substantiation, light is not yet waves. But at the time interference was discovered, the theory of light was at the genetic stage, not at the stage of substantiation. At that stage it is justified to make the genetic supposition that light consists of waves with all their properties. Such a supposition has mighty heuristic capacity, because it directs new experiments toward testing whether light possesses the other wave properties. Such experiments were carried out, and it turned out that light has wavelength, and frequency, and phase, and so on. Only after that is it possible to consider light a wave phenomenon, understanding by this the concrete set of wave properties that had already appeared in all these experiments. But even then it is forbidden to call light simply waves.
For we cannot guarantee that in doing so we will not attribute to light some properties which the usual, known and studied waves possess but light does not. And indeed, as we know, light is not simply waves: it possesses some wave properties and some corpuscular properties. What is important, it does not possess the property of continuity, which we attribute to ordinary waves and which never appeared in any experiment with light. But the adherents of the wave theory of light, who did not know the general method of substantiation, thought that light not only possesses concrete wave properties but simply is waves, with all their properties, including continuity. That is why they thought the contradiction between the wave and the corpuscular theories of light insurmountable. The continuity of light waves, if it existed, would contradict the corpuscular nature of light. But because light waves do not possess continuity, there is no contradiction. Smuggling the idea of the continuity of light waves implicitly into the wave theory breaks the rules of the general method of substantiation, because the property of continuity, which appeared in no experiment and is therefore disconnected from experiment, is thereby ascribed to the concept of light. And, as we see, it leads to contradiction.
The second example is the history of Newton's definition of mass in his mechanics. Although Newton played an exceptional role in creating the general method, in his time the method was not yet completed even at the level of a model, and moreover it was not described in explicit form. That is why Newton defined mass as the quantity of corpuscles, although, naturally, he never counted any corpuscles in any object. This was a violation of the general method of substantiation, and at the same time it contradicted Newton's second law, from which follows the definition of mass as the measure of proportionality between force and acceleration. This contradiction was discovered by Euler, and since then we know only the latter definition of mass.
The next example is the history of the Landau-Peierls paradox and its solution by Bohr and Rosenfeld. The paradox appeared during the building of the quantum-relativistic theory of the electromagnetic field. The equations of the classical theory of the electromagnetic field (Maxwell's equations), relating to E and H (the electric and magnetic field strengths), could not be abolished in the quantum-relativistic theory either. But in the classical theory E and H were continuous functions determined at each point of space, which, due to the quantum nature of this field, could not be the case in a quantum-relativistic description of it. Bohr and Rosenfeld solved this problem by defining the variables E and H of Maxwell's equations in the quantum-relativistic theory as new concepts. Of course, they were the same field quantities with all their properties except one: now they were determined not at each point of space but only in some vicinity of a point, a vicinity in which E and H acquired sense as integral characteristics of the micro-processes of disintegrating and appearing particles and could, in principle, be measured. (Just as the pressure and temperature of a gas are integral characteristics of the process of molecular motion and have sense only in a volume several times larger than the dimensions of a molecule.) In this way the concepts of the new theory were anew connected with experience. Here the question arises: where did the property of continuity of the functions E and H in the classical theory come from? It is easy to see that it was a violation of the norms of the general method of substantiation, because no experiment in an infinitely small vicinity of a point can be realized in principle, and therefore we cannot observe any such property in that vicinity.
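The gas analogy can be illustrated numerically. The sketch below (my illustration, not the author's) scatters discrete "particles" on a line: the density "at a point" is ill-defined, jumping between zero and enormous values, while the average density over a vicinity much larger than the inter-particle spacing is stable and meaningful, just like an integral characteristic:

```python
import random

random.seed(0)

# 10,000 'particles' scattered uniformly on the segment [0, 1];
# the mean density is therefore 10,000 particles per unit length.
particles = [random.random() for _ in range(10_000)]

def density_in_vicinity(center, half_width):
    """Integral characteristic: particle count divided by vicinity length."""
    count = sum(1 for x in particles
                if center - half_width <= x <= center + half_width)
    return count / (2 * half_width)

# A 'point-like' vicinity (far smaller than the mean spacing ~1e-4)
# gives wildly fluctuating values: usually 0, occasionally enormous.
tiny = density_in_vicinity(0.5, 1e-7)

# A vicinity much larger than the spacing gives a stable value
# close to the true mean density of 10,000.
coarse = density_in_vicinity(0.5, 0.05)

print(tiny, coarse)
```

The quantity only acquires a definite, measurable sense once the vicinity is large compared with the scale of the discrete micro-events, which is the point of the Bohr-Rosenfeld redefinition.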
But for a rational explanation of the above-mentioned phenomena of science it is not enough to establish the mere fact of the connection of concepts with experience. It is necessary also to clarify the character of this connection, which permits describing the same ontological substance by different concepts (with different properties) and obtaining the same conclusions from different sets of axioms, without converting them into mere convenient constructs of cognition. For that, let us consider, from the point of view of connection with experience, what happens to close concepts when one fundamental theory is replaced by another. As an example, take the transition from Newton's mechanics to Einstein's theory of relativity. At first glance it may appear that such a transition refutes any possible attempt to tie concepts and their properties to experience. Indeed, the close concepts of Newton and Einstein (time, space and so on) simply possess different properties. By itself this would mean only that in each case we choose for the definition different properties out of the countless number of properties of these objects (just as in the classical model of gases we choose for the definition such properties as temperature and pressure, and in the kinetic theory the speed of the molecules and their number per volume). That is normal from any point of view and does not contradict the connection with experience. But in cases like the Newton-Einstein transition these concepts seemingly possess contradicting properties. At first glance it seems that time cannot be simultaneously absolute and relative, and that therefore both Newton's time and Einstein's (or at least one of them) is not tied to experience.
But in reality both are tied to experience, and in reality the one object, time, can possess both properties, absoluteness and relativity, simultaneously. I shall begin the explanation with an analogy. Imagine that we have a piece of a sickle and try to describe the geometry of its edge. Our receptors (eyes) send signals to the mind, where they are automatically compared with image-etalons, and the mind gives us the percept: it is similar to a circle. But we do not trust the simple visual percept and measure the curvature of the edge at various points with instruments. It turns out to be not exactly constant, but all the same very close to constant. Remembering that real objects may correspond to our concepts (to their nominal definitions) only within admissible deviations, we conclude that the form of the sickle's edge is a circle. After that we find a whole sickle, made at the same plant according to the same standard (that is, belonging to the multitude of objects defined by our concept "sickle"), and convince ourselves that its form is not a circle but some other curve, say, a parabola. Does this mean that our previous definition was wrong, that it gave us a faulty or, in Quine's terms, relative ontology, that it was tied not to experience but to theory? No, it means only that the ontology, like the theory as a whole, relates to some concrete field of reality, which (and only which) the theory describes. It is not that the ontology is tied to the theory: both the theory and the ontology are tied to the described reality. What happens when we pass to a reality which includes the previous one, whether as an expansion of it (Einstein for Newton) or as a deepening of it (the micro world for the macro world)?
One would like to say that in this case the new theory is a generalization of the previous one and the new ontology a generalization of the previous ontology; that the previous theory and ontology are a particular case of the new, and may even be obtained from the new theory and ontology by means of a limit transformation, as happens in the case of Newton and Einstein (Einstein's time is expressed through Newton's by the time-dilation formula t′ = t√(1 − v²/c²), where v is the speed of the inertial system and c the speed of light; when v tends to 0, Einstein's time becomes Newton's, and for v small relative to c they practically coincide). But the story of Newton and Einstein is only a particular case of fields of reality including one another. The example with the sickles, despite its conditionality and primitiveness, describes more universally the character of the interrelation of such theories, more exactly the interrelation of their ontologies. To understand this better, recall that in an axiomatically constructed theory the concepts are determined by the axioms. Axioms are nothing more nor less than assertions about relations between the objects we are describing in concepts, and these relations, generally speaking, are functions (for example, by Ohm's law-postulate, electric current is proportional to electric potential and inversely proportional to resistance). It is possible to express these functions by formulas or by graphs. Graphs lead us to the nerve of the problem. For axioms-postulates appear at the stage of substantiation of a theory, and substantiation is preceded by the realization of experiments at the stage of genesis. The result of each experiment gives us a point on a graph. Through these points we draw a curve on the graph, which afterwards is converted into an axiom. The graph is our sensory percept of a property (relation); the axiom (formula) is its definition in the theory.
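The passage from experimental points to a formula-postulate can be sketched as follows (my illustration; the "measured" data are invented): a linear law is fitted to noisy current-voltage points, and the fitted coefficient becomes the constant in the resulting formula I = U/R:

```python
import random

random.seed(1)

# Invented 'experimental' points: current through a resistor of true
# resistance R = 50 ohm, measured at several voltages with small noise.
R_true = 50.0
voltages = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
currents = [u / R_true + random.gauss(0, 0.001) for u in voltages]

# Least-squares fit of the line I = G * U through the origin:
# G = sum(U_i * I_i) / sum(U_i^2), the conductance.
G = sum(u * i for u, i in zip(voltages, currents)) / sum(u * u for u in voltages)
R_fitted = 1 / G

# The curve drawn through the points is converted into the
# formula-postulate I = U / R, with R close to 50 ohm.
print(round(R_fitted, 1))
```

The scatter of the points around the fitted line is exactly the gap, discussed below, between the formula-postulate and the really existing property it approximates.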
But, as we know, the experimental points on such graphs never lie exactly on the straight or curved lines that are afterwards converted into formulas-postulates. Our formulas-postulates, our concepts with their properties, the ontology which science gives us, are only an approximation of the really existing properties of nature. An approximation is, indeed, not an absolute reflection of natural properties, but neither is it the ontological relativity of which Quine speaks. I should say that approximation is by no means ontological relativity; it is simply incomplete accuracy, the limited exactness of science. But approximation explains the essence of the relation between the ontologies of theories like Newton's and Einstein's in the general case. As long as our experience is limited to some part of reality (a bit of the sickle), a rougher approximation of the real property satisfies us, providing enough exactness (the circle). When we gain access to a wider part of reality (the whole sickle or a bigger piece of it), this rough approximation is no longer exact enough, and a new approximation is required, providing enough exactness not only in the previous part of reality but also in the new, wider one. As the sickle example shows, qualitatively different approximations are not obliged to convert one into another through a limit transformation. Nevertheless both will reflect with some exactness the property of real objects, that is, the real ontology; only one will do so in a smaller part of reality, and the other in a wider part including the previous one. And each of them will be tied to reality by means of experience (through the points on the graph).
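The sickle analogy can be made numerical. In the sketch below (my construction, not the author's), the "true" edge is a parabola; over a short piece near the tip a circle approximates it excellently, but over a wider piece the circular approximation fails, even though both curves describe the same real edge:

```python
import math

def parabola(x):
    """The 'true' form of the sickle's edge: y = x^2."""
    return x * x

def osculating_circle(x):
    """Best-fitting circle at the tip: radius 1/2, center (0, 1/2)."""
    return 0.5 - math.sqrt(0.25 - x * x)

def max_error(width, steps=100):
    """Worst disagreement between circle and parabola on [0, width]."""
    xs = [width * k / steps for k in range(steps + 1)]
    return max(abs(osculating_circle(x) - parabola(x)) for x in xs)

# On a small piece of the edge the circle is an excellent approximation...
small = max_error(0.1)   # about 1e-4

# ...but on a wider piece the circular approximation is no longer
# acceptable, while the parabola remains exact everywhere.
large = max_error(0.4)   # about 0.04

print(small, large)
```

Neither curve is the "relative ontology" of the edge; each approximates the one real form, with an exactness sufficient only within its own part of the edge.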
I suspect that even after this explanation some psychological discomfort remains with the reader: some mere approximation against the obvious ontological contradiction between the absoluteness and relativity of time. But let us remember that "absolute" and "relative" are only words of language, which in principle are inexact and have many meanings, whereas cognition is built not on words but on concepts. In language "absolute" and "relative" have a very wide and vague set of meanings, and for many people "relative" is associated with "arbitrary". But the absoluteness of time according to Newton has a concrete functional sense: t1 = t2, time flows equally in different systems. Einstein's relative time has the same strictness and functionality, but in this case the function is another: t1 = t2√(1 − v²/c²).
(This relativity of Einstein's, as we see, has nothing in common with the "relativity" of everyday conversation, according to which "everything is relative".) These are different functions, but, like the circle and the parabola in the sickle example, the two functions give equally acceptable approximations of the real ontology (properties) of time within the sphere of workability of the Newtonian model. Beyond those boundaries only Einstein's model gives an acceptable approximation. And it is clear that sooner or later we will meet experience (according to astrophysical data, we already have) which will require repairing Einstein's approximation of time as well. In other words, real time turns out to be both absolute and relative simultaneously. More exactly, it flows (in an inertial system) according to some law which, within the sphere of workability of the Newtonian model, is approximated well enough by Newton's absolute time, and in a wider sphere by Einstein's relative time.
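The claim that the two approximations of time coincide within the sphere of Newtonian experience can be checked numerically (a sketch; the speeds are illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dilation_factor(v):
    """Factor relating Einstein's time to Newton's: sqrt(1 - v^2/c^2)."""
    return math.sqrt(1 - (v / C) ** 2)

# Within the experience available to classical mechanics (speeds up to
# tens of km/s), Einstein's time and Newton's absolute time, for which
# the factor is exactly 1, coincide practically:
v_classical = 20_000.0  # 20 km/s
newton_deviation = 1 - dilation_factor(v_classical)
print(newton_deviation)  # of order 1e-9

# Far beyond that sphere, the absolute approximation fails badly:
v_fast = 0.9 * C
print(dilation_factor(v_fast))  # about 0.436
```

For the whole range of pre-relativistic experience the two laws differ by about a part in a billion, which is precisely why the rougher, absolute approximation was acceptable there.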
The connection of concepts (and axioms) with experience refutes ontological relativism, and also the assertion of the social post-positivists about the influence of the social factor on the initial premises and conclusions of scientific theories. Einstein could not have made time relative under anybody's influence if he had not tied his relativity to experience.
Finally, the approximative character of the connection between concepts and experience also refutes Popper's fallibilism. Without entering into the requirements of the general method concerning the verification of conclusions, and the correction of the sense of truth connected with it, it is possible, on the basis of the above considerations alone, to say that not only concepts (their properties) are approximations of the real ontology: a scientific theory as a whole, including its conclusions, is an approximation of the described reality. And like any approximation it is true (workable) only in a concrete, limited sphere. A so-called refuting experiment is not evidence of the fallibility of the corresponding theory and does not abolish its truth; it only establishes the limits of the field in which this theory gives an acceptable approximation. Not only does the previous theory remain true, but so does its substantiation. Just as the new theory does not cancel the previous one, the new substantiation does not cancel the previous one. Both the new and the old substantiation are made according to the same method, namely the general method of substantiation. In the one case the concepts are tied to one data set, in the other to another (although these sets intersect; more exactly, the new set includes the previous one as a part).
Here the relativists can object: if we do not know the limits of the truth of our theory before the refuting experiment, is that not equal to the relativity of the truth of this theory? But this is not correct, because even if we do not know the exact limits of workability of our theories, we know the limits of some reduced field in which this truth (workability) is guaranteed: the limits of already existing experience. If, say, at the moment of the creation of classical mechanics mankind already had experience with speeds up to 20 km/sec, and Newton's absolute approximation of time was tied to this experience, then all of Newton's mechanics was rightly accepted as true, as giving an acceptable approximation of reality within the field determined by existing experience, namely for speeds smaller than 20 km/sec. All that on condition that this mechanics was built according to the general method of substantiation (axiomatically and so on) and that all the so-called normal conditions were preserved (if, for example, annihilation of substance occurred, the normal conditions were not preserved, and Newton's theory is not workable in that situation). At the same time a non-science, say astrology, cannot guarantee the truth of its conclusions within any limits, even without any annihilation of substance.