You ask:
Why does the author only distinguish between expressions that can be employed without explaining their meaning (PRIMITIVE TERMS) and expressions that can only be employed once their meaning is explained using PRIMITIVE and DEFINED terms? Aren't there expressions sort of in the middle of those two, that is, expressions whose meaning is explained using only PRIMITIVE TERMS?
(The TLDR is that sentences in theories always contain some combination of logical primitives and defined terms.)
"I propose to establish a theorem belonging to logistic concerning some connexions, not widely known, which exist between the terms of this discipline... The problem of which I here offer a solution is the following: is it possible to construct a system of logistic in which the sign of equvalence is the only primitive sign (in addition of course to the quantifiers)? - Tarski, "On the Primitive Term of Logistic"
Russell and early Wittgenstein are philosophers representative of a school of thinking called logical atomism, and if we view Tarski's logic through the same lens, particularly in the footsteps of Hilbert-style deduction, what we see in outline is an attempt to leverage the Fregean presumption of the Principle of Compositionality. Frege, of course, started teasing apart strings of text long before modern computer scientists made a daily practice of it, in order to begin the project of symbolizing logic. Using a taxonomy of composite language, defined terms, and primitive terms (terms whose meaning is taken as atomic) is a hallmark of the analytic tradition of which Tarski is a part.
Composite language derives its meaning largely from its parts. Taking the sentence "The snow is white" as an example, each word in the sentence contributes to the meaning, and the meaning changes when words are added, changed, or removed. For instance, "The snow is green" has a different truth value because it predicates of snow a color that, ceteris paribus, snow is not.
Definitions traditionally have no truth value because they are seen as creating a reference to other meanings in order to simplify, much in the way a paraphrase works in language. If we say "a bachelor is an unmarried man", then we are simply using a new term for an old concept, and the meaning of "bachelor" is said to be "contained in" the defining phrase. This generally applies to adjectives and nouns: crack open a dictionary, and many nouns are defined in simpler terms.
Primitive terms are used in definitions, and defining them can often be difficult. For a logician, logical connectives are terms taken on intuition, and it is only relatively recently, since around the time of Boole (though he was not the first), that attempts have been made to define them. A modern logician can appeal to proof-theoretic semantics as a theory of logic, but men like Boole, Frege, and Tarski were active in inventing these modern logical theories and were in some sense feeling about for solutions to long-standing problems in the philosophy of logic and language. The question of defining the logical connectives was part of this project.
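As a rough illustration of what defining a connective in the Boolean spirit amounts to, "and" can be characterized by its truth conditions (a standard textbook truth table, not a reconstruction of any particular author's notation):

$$\begin{array}{cc|c}
p & q & p \wedge q \\
\hline
T & T & T \\
T & F & F \\
F & T & F \\
F & F & F
\end{array}$$

On this approach, the primitive's "meaning" is given by the pattern of truth values it induces rather than by a paraphrase into simpler terms.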
Primitive terms like articles and conjunctions inevitably occur with nouns and adjectives which presumably can be defined, even if left temporarily undefined. That is, even undefined terms are references that can presumably be defined in a way that primitive terms cannot. So, if "and" is a logical primitive and "bachelor" is a defined term, then "an old bachelor and a young bachelor" is a phrase that uses both defined and primitive terms, while "an and a" is a phrase that makes no sense, despite being composed only of primitive terms.
So, the taxonomy put forth by Tarski is an oversimplification, or an abstraction, of how natural language works, and natural language never contains a sentence of mere logical primitives. Tarski was providing an account of linguistic taxonomy specific to logical meaning, which, as this recent answer on the difference between "but" and "and" illustrates (PhilSE), is only one "layer" of meaning in a text. And here is where we draw an important point.
Tarskian semantics, which is largely taught at the outset of logic and is a continuation of Frege's program of symbolizing logic, that is, of creating a formal language, is an oversimplification of natural language and linguistics. Natural language has ontological and epistemological dimensions; it has a dimension in terms of being a generative grammar; and it has a logical dimension. The formalisms meant to describe each aspect are all oversimplifications with their own taxonomical interests.
Tarski, following up on Frege, Russell, and Hilbert, largely developed his ideas of the semantic theory of truth in response to mathematical theorizing, and in such theories we are largely concerned with relations between, and operations on, terms. Thus, it is natural to see an analogue in logical formalisms: relations and operations as the logical primitives to be studied, and terms as that for which we can provide a definition, if not immediately, then eventually. Logical thinking mirrors this mathematical thinking, because to symbolize logic was to apply mathematical methods and presumptions to previously unformalized logical thinking.
Thus, there aren't sentences in logic that are all primitives, in the same way one doesn't see mathematical statements composed of just relations and operations. You will never see "+ - = (++)" in arithmetic because it is not a well-formed formula. In the same vein, you will not see a sentence like "^ ^ <--> !v" composed only of primitives. This is not because it can't be written, but because natural language wouldn't function without atomic terms, terms which might be associated with domains of discourse whose membership is provided for in natural language by definition.
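To make the point concrete, here is the usual textbook formation grammar for propositional formulas (a standard presentation, not Tarski's own notation). Every clause bottoms out in a propositional variable $p$, which plays the role of the definable, non-primitive expressions, so no derivation can ever yield a string of bare connectives:

$$\varphi ::= p \mid \neg \varphi \mid (\varphi \wedge \varphi) \mid (\varphi \vee \varphi) \mid (\varphi \rightarrow \varphi) \mid (\varphi \leftrightarrow \varphi)$$

Strings like "^ ^ <--> !v" fail to be well-formed precisely because they omit the atomic leaves the grammar requires.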