Discussion about this post

Fernando Val

Forgive me for disagreeing, Simón. Aren't reasoning models already starting to show a certain kind of intelligence? I'm not asserting it, just asking the question.

For example, I posed this classic moral dilemma to DeepSeek in its reasoning mode.

I asked it this:

I'm going to pose a moral or ethical dilemma to you.

You're driving a car. Suddenly a child crosses the road and you're heading straight toward him; there's no time to brake, and if you brake you'll hit him anyway. You can only avoid it by swerving onto the left sidewalk, where there is an elderly woman who would then be run over. Or you can swerve right and crash into a wall, putting your own life at serious risk.

What would you do? Which of the three options would you choose?

And this is what DeepSeek "reasoned":

"Okay, the user is asking me to present a thought process for a moral dilemma they posed earlier. Let me start by recalling the dilemma. It's the classic trolley problem adapted to a self-driving car scenario. The driver must choose between hitting a child, swerving left to hit an elderly person, or swerving right into a wall, risking their own life.

First, I need to outline the possible options and the ethical theories that apply here. Utilitarianism would suggest minimizing harm, so hitting the elderly person if they are the only one at risk. Deontological ethics would focus on the action itself, so maybe not swerving because it's an intentional act. Virtue ethics would consider the driver's character, perhaps prioritizing self-sacrifice.

I should also consider real-world factors like legal implications. For example, intentionally swerving might lead to liability issues. Then there's the emotional aspect—people might react differently under stress, possibly acting instinctively.

I need to structure this in a way that's easy to follow. Maybe break it down into ethical frameworks, practical considerations, and personal reflection. The user might be looking for a structured analysis rather than a direct answer, since these dilemmas don't have a clear right solution.

Wait, the user also wants me to simulate the assistant's previous response. Let me check how the assistant formatted the earlier answer. It used sections with headers and bullet points, so I should follow that style. Also, the previous response was in Spanish, so I need to keep the language consistent.

I should start by explaining the dilemma and then delve into each ethical theory. After that, discuss practical aspects like legality and instinct. Finally, offer a reflection that there's no universal answer, encouraging the user to think about their own values.

Make sure to mention that this is a theoretical exercise and real decisions in split seconds are different. Also, highlight that different cultures and legal systems might view the choices differently. Maybe include questions to prompt the user's own consideration.

Check if I'm missing any key points. The original dilemma has three options: hit the child, elderly, or self. Each has its own ethical weight. The elderly person vs. the child could involve considerations of age, but utilitarians might argue saving the younger life. Self-sacrifice is a virtue ethics angle. Also, the driver's responsibility—should they prioritize their own safety?

I need to present all angles without bias. The user might be testing how I handle moral dilemmas or seeking guidance on ethical reasoning. Since the assistant's role is informative, I should avoid giving a personal choice but rather explain the factors involved.

Alright, structure the response with clear sections, use examples from ethical theories, mention practical considerations, and conclude with the importance of personal reflection. Keep the tone neutral and analytical, as in the previous answer."

As you can see, while the process still feels somewhat artificial, it gives me the impression that this could be considered a kind of "proto-intelligence". And even though we can't claim it thinks or reasons in a "human" way, it is certainly resembling that more and more. I also have the feeling that we keep judging and evaluating AIs by what they do today, while ignoring what their trajectory tells us they will be doing before long.

Jose M. Lobato

I'm a user of DeepSeek and Claude, among others. I use DeepSeek to write Python scripts, but whenever I get an error I run straight to Claude to fix it for me. I don't think the corpus is what's behind the lack of intelligence; rather, I suspect Anthropic simply has a lot of engineers fixing problems. Anyway, these technologies are good for killing boredom, though I know they'll have to improve the Transformer if they want it to actually think. 🤣
