Workshop on Philosophy of Logic (MCMP - University of Buenos Aires)
Buenos Aires, September 26-27, 2013
SADAF - Bulnes 642  
Program
Thursday, September 26
Chair: Eduardo Barrio
15:00 - Johannes Stern (MCMP) “Axiomatizing Semantic Theories of Truth?”
 
16:00 - Ramiro Caso (CONICET - UBA) “Lessons learned from the philosophy of logic: Absolute generality and natural language semantics”
Coffee break
Chair: Lucas Rosenblatt
18:00 - Thomas Schindler (MCMP) “Reference Graphs, Dependency Games and Paradox”
19:00 - Natalia Buacar (UBA) “Philosophical grounding of deduction”
Friday, September 27
Chair: Federico Pailos
15:00 - Catrin Campbell-Moore (MCMP) “A Kripkean fixed-point style semantics for credence”
16:00 - Damian Szmuc (UBA) “On Pathological Truths”
Coffee break
Chair: Lavinia Picollo
18:00 - Paula Teijeiro (UBA) “Sorites sequences and the continuum”
19:00 - Eleonora Cresto (CONICET) “Group Knowledge, Evidential Probabilities and Responsibility”
Organized by Eduardo Barrio
This event is supported by the DAAD Project (UBA - MCMP) 2013-2014, ID 56133156: Truth, Paradoxes and Modalities. Directors: Eduardo Barrio and Hannes Leitgeb.
Abstracts: 
Johannes Stern (MCMP) “Axiomatizing Semantic Theories of Truth?”
We discuss the interplay between the axiomatic and the semantic approach to truth. Often, semantic constructions have guided the development of axiomatic theories, and certain axiomatic theories have been claimed to capture a semantic construction. We ask under which conditions an axiomatic theory captures the semantic construction. After discussing some potential criteria, we focus on ω-categoricity as a criterion and discuss its usefulness and limits.
Ramiro Caso (CONICET - UBA) “Lessons learned from the philosophy of logic: Absolute generality and natural language semantics”
Providing a suitable semantic account of expressions of generality in natural language is not without its problems. Inquiry has focused mainly on quantifier domain restriction, taking for granted the possibility of employing usual model-theoretic methods to interpret quantifiers in natural language constructions. In this paper, I focus on the problem of absolute generality and its consequences for natural language semantics. I suggest that considerations of indefinite extensibility may make a model-theoretic understanding of quantifiers impossible, and argue that, were this so, usual model-theoretic methods could not be used to deal with the interpretation of quantified sentences in natural language. I explore possible ways of dealing with expressions of generality in natural language.
Thomas Schindler (MCMP) “Reference Graphs, Dependency Games and Paradox”
The idea that the paradoxicality of a sentence can be tied down to certain pathological patterns of reference adherent to the sentence is more or less ubiquitous throughout the literature on semantic paradoxes. However, no comprehensive account has been given so far that provides satisfactory answers to the questions of (i) exactly which patterns should count as pathological and (ii) how these patterns get associated with the sentences of our language. We will provide a game-theoretic semantics for Kripke’s theory of truth, treating various valuation schemes in a uniform manner, such that Kripke-paradoxical sentences can be characterized by properties of the game strategies available to the player who aims to show that a sentence has a definite truth value in some Kripke fixed point. Moreover, such strategies can be interpreted as (decorated) reference graphs of the sentence in question. In this way a framework for a graph-theoretic analysis of Kripke-paradoxical sentences is provided. We will argue that, when valuation schemes stronger than Weak Kleene are considered, there are certain sentences to which no canonical reference graph can be assigned: the notion of a single reference graph, applicable to Weak Kleene, must be replaced by that of a system of reference graphs. Nevertheless, all necessary resp. sufficient conditions we provide for a sentence’s Kripke-paradoxicality, given in terms of graph-theoretic properties of its canonical reference graph in the case of Weak Kleene, are exactly the same for all other valuation schemes, with the sole difference that now all members of the whole system of graphs have to be taken into account.
Natalia Buacar (UBA) “Philosophical grounding of deduction”
Agreement is not a widespread phenomenon among philosophers. In this sense, confidence in deduction can be considered atypical, to such an extent that even today many argue that deduction does not require justification. However, since the challenge posed by Lewis Carroll (1895), many philosophers of logic have taken the problem up. On its traditional formulation, the problem of the justification of deduction amounts to dealing with the circularity apparently and inevitably involved in any attempt to justify that the deductive rules preserve truth, i.e., that they are valid. Answers to it either point out that no vicious circularity is involved or try to find some kind of external guarantee to break the circle. In this work I offer reasons to doubt the correctness of this formulation. I suggest that a misconception about the normativity of logic underlies it, and I propose that there is an interesting problem around deduction that differs from the former. I argue that this problem is philosophical in nature, as is any answer to it. Finally, I outline a reply which, among other things, picks up the inferentialist program as presented in Inferential Role Semantics and discussed by Jaroslav Peregrin (draft). The first stresses the importance of dispositions in determining which rules are meaning-constitutive of expressions; the second also insists on the “normative attitudes” of speakers, the corrections they make. After commenting on the notions of rule and meaning-constitutivity, I highlight the importance, in such a determination, of the common situation of teaching deduction and of its very possibility.
Catrin Campbell-Moore (MCMP) “A Kripkean fixed-point style semantics for credence”
We provide a theory which allows us to reason about the notion of credence; in particular, it allows for the formalisation of higher-order credences. We argue that the best way to conceive of credence is as a predicate, which then allows for the derivation of the diagonal lemma. We develop a semantics based on a possible-world style structure. This semantics generalises Kripke’s construction from “An Outline of a Theory of Truth” and works along the lines of Halbach and Welch’s “Necessities and Necessary Truths”. We also give some axioms corresponding to the construction.
Damian Szmuc (UBA) “On Pathological Truths”
In Kripke’s classic paper on truth it is argued that, by weakening classical logic, it is possible to have a language with its own truth predicate. Usually it is claimed that a substantial problem with this approach is that it lacks the expressive resources to characterize those sentences which are pathological. The goal of this paper is to offer a refinement of Kripke’s approach in which this difficulty does not arise. We tackle the characterization problem by introducing a pathologicality operator into the language, and we propose a peculiar fixed-point semantics for the resulting theory in order to establish its consistency.
Paula Teijeiro (UBA) “Sorites sequences and the continuum”
The Sorites paradox challenges the adequacy of classical logic in dealing with vague predicates. One of the strongest alternatives is to employ fuzzy sets in order to avoid positing the existence of sharp boundaries between the extension and the anti-extension of such predicates. Nevertheless, many authors, like Sainsbury, claim that no kind of set, fuzzy or otherwise, can capture the desired interpretation, and that formal semantics for vague predicates has to be given up altogether. The idea is that continuity and imprecision cannot be captured by a mathematical model. But the relation between vagueness and continuity is not so simple: it is taken for granted by many fuzzy theorists, and it has been dismissed as trivial or inadequate by others. The aim of this project is to understand how discreteness and continuity interact to give rise to vague predicates. John Bell has been working on the topic of the continuum with regard to the role that infinitesimals have played in the history of mathematics and philosophy. According to him, the development of nonstandard and nonclassical analysis has provided a mathematically precise understanding of the “true” continuum. We will explore the possibility of framing the semantics for vague predicates in this kind of nonclassical setting, in a way that answers some of the objections to traditional fuzzy approaches.
 
Eleonora Cresto (CONICET) “Group Knowledge, Evidential Probabilities and Responsibility”
In previous work I developed a framework that allows us to attribute both knowledge and higher-order evidential probabilities to single agents. The model validates a moderate version of the so-called KK principle of epistemic logic without actually requiring that the underlying accessibility relations be transitive; I contended that moderate epistemic transparency so conceived can be seen as a requirement of ideal epistemic responsibility. Here I seek to extend the previous analysis to collective agents. This involves several conceptual and technical challenges. I propose to model paradigmatic examples of group knowledge by means of the technical notions of common and distributed knowledge, as well as of various intermediate concepts; I also suggest a way to reconcile several conflicting intuitions by appealing to a dynamic framework. Among other things, ‘public announcement’ phenomena are here reinterpreted as requirements of ideal individual responsibility; an ideally responsible individual will seek to embrace the knowledge of the group and make it her own. Finally, I also explain how to deal with evidential probabilities for the group (qua group). First-level evidential group probabilities (in a particular world) are rendered by sets of individual measures conditional on the strongest proposition known by the group at that world. Second- and higher-level group probabilities, on the other hand, turn out to be trivial in the model, much unlike higher-order levels of individual evidential measures. I end by discussing the philosophical significance of this result.