Wikipedia:Village pump (policy)
Policy | Technical | Proposals | Idea lab | WMF | Miscellaneous
- If you want to propose something new that is not a policy or guideline, use Village pump (proposals). For drafting with a more focused group, you can also start on the talk page for a WikiProject, Manual of Style, or other relevant project page.
- If you have a question about how to apply an existing policy or guideline, try one of the many Wikipedia:Noticeboards.
- If you want to ask what the policy is on something, try the Help desk or the Teahouse.
- This is not the place to resolve disputes over how a policy should be implemented. Please see Wikipedia:Dispute resolution for how to proceed in such cases.
- If you want to propose a new or amended speedy deletion criterion, use Wikipedia talk:Criteria for speedy deletion.
Please see this FAQ page for a list of frequently rejected or ignored proposals. Discussions are automatically archived after remaining inactive for two weeks.
We need to fix the admin recall process
Right now only "recall" votes count, and those opposing recall don't count for anything, nor do any points made in the discussion. So 25 quick group-think / mob thumbs-down votes and even the best admin can get booted. And the best (= the most active) are the ones most likely to get booted. An admin that does near zero will get zero votes to recall. And with a single regular RFA currently the only way back in (which, as we've seen, very few want to go through), "booted" is "booted". The fix would be to have a discussion period prior to voting, with both "recall" and "don't recall" choices, and then say that the recall has occurred (thus requiring RFA) if over 50% or 60% of those voting said "recall".
Sincerely, North8000 (talk) 20:40, 19 November 2024 (UTC)
- @North8000 Please see Wikipedia:Administrator recall/Reworkshop, where editors are already discussing potential changes. Sam Walton (talk) 20:43, 19 November 2024 (UTC)
- Thanks. I looked for something like that but I guess I didn't look hard enough. I hope others look harder than me. :-) North8000 (talk) 21:58, 19 November 2024 (UTC)
- I don't think you understand how recall works. An admin is only desysopped after the RRFA, not after the 25 signatures, unless they choose to resign on their own. You're asking to hold a vote on whether or not a vote should be held. ~~ Jessintime (talk) 20:55, 19 November 2024 (UTC)
- Yes, I understood that and that is integrated into my comment above. Unless they go through and succeed at an RFA they are gone. North8000 (talk) 21:54, 19 November 2024 (UTC)
- I've never heard of a petition that lets people sign because they don't support it. And I'll add that between the two recall petitions that were enacted to this point, both were preceded by many, many attempts to get the admin to correct course over the years despite egregious misconduct. Thebiguglyalien (talk) 21:03, 19 November 2024 (UTC)
- I'm not talking about any particular cases. Sincerely, North8000 (talk) 21:56, 19 November 2024 (UTC)
- So, the premise of your argument is pure conjecture? Regards, Goldsztajn (talk) 22:05, 19 November 2024 (UTC)
- ???? It was from an analysis of its current structure. North8000 (talk) 14:10, 20 November 2024 (UTC)
- But you've just refused to engage in a discussion of how the structure has actually worked in practice; hence, conjecture. Regards, Goldsztajn (talk) 00:19, 21 November 2024 (UTC)
- The process at the moment does have a certain level of redundancy, with the recall and the reconfirmation RFA being separate things. The reconfirmation RFA isn't even a standard RFA, as it has different criteria for success.
- I'm not sure if anything should be done yet, as it's still very early in its adoption. However, if a petition is successful but the reconfirmation RFA SNOWs, it could indicate that adjustments need to be made so that community time isn't wasted. That's speculative at the moment though. -- LCU ActivelyDisinterested «@» °∆t° 23:53, 19 November 2024 (UTC)
- The recall petition threshold is not the recall discussion - it is just a check to prevent the most frivolous recall discussions from being held. — xaosflux Talk 00:56, 20 November 2024 (UTC)
- The optics of this look altogether terrible from my observation. I don't edit much, but I like reading a lot. Every criticism of the recall process I've seen so far just looks like old established admins thinking they might be next and having anxiety about that.
- The problem with something like this is that the optics are terrible. If anyone who doesn't know you reads that, the conclusion they will draw will likely not be "this recall process is terrible" and will more likely go along the lines of "wow this is a lot of admins who don't have the community's trust anymore and want to dodge accountability".
- By being so vocally against any form of community-led accountability, you're strengthening the case for easy recalls and low thresholds, not weakening it.
- Specifically regarding Fastily, I'll make no comment on whether or not he deserves to still be an admin, I don't know him well enough for that and haven't reviewed enough of his contributions, but the arguments of "ANI agreed that no sanctions were appropriate" sound a lot like "our police department has investigated itself and found nothing was wrong". You have to see how this comes across, it's eroding trust in Admins on the whole project right now. Magisch talk to me 09:24, 20 November 2024 (UTC)
- Specifically, if RFA is so toxic that nobody wants to do it, that needs to be reformed. But the recent amount of vitriol towards a process that only kickstarts having to prove that you retain community trust has me convinced that there should be automatic mandatory RRFAs for every admin every 2 years or so.
- If, as of today, you don't believe the community would entrust you with admin tools, why do you think you should still have them? The criteria for losing them should not be "has clearly abused them", it should be "wouldn't be trusted with them if asked today". Magisch talk to me 09:33, 20 November 2024 (UTC)
- As an admin actively working to improve the recall process, my goal is to make it as fair as possible to all parties. That means it should not be possible to subject an admin to the process frivolously while equally making it possible to recall administrators who have lost the trust of the community, and it needs to be as non-toxic as possible, because even administrators who are actively abusing their tools are people and nobody deserves 1-2 months of abuse. It's also incorrect to describe ANI as a police department investigating itself - everybody engaging in good faith is welcome to comment there, regardless of whether they are an admin or not. Thryduulf (talk) 11:15, 20 November 2024 (UTC)
- @Thryduulf It's the Administrators' Noticeboard, naturally the vast majority of participants will be either admins or people who are involved in the same work.
- I don't think asking an admin to confirm they still retain the trust of the community (the whole basis of giving out admin tools to begin with) is ever really frivolous. The current process allows that at most once a year. If an admin had to stand for RFA every year, that might be a bit too much long term, but really, if any admin thinks they would not pass RRFA today, why should they retain their tools?
- Also, the sheer optics of it being mostly (from what I've seen) established admins calling this process toxic are terrible. Anyone who doesn't know anything about this process will see this as some kind of thin blue line mentality in the admin corps - and might conclude that it is time to desysop the majority of old admins to dissolve the clique.
- I wouldn't be surprised if we see a bunch of recall petitions for the most vocal critics of this process. Magisch talk to me 11:27, 20 November 2024 (UTC)
- I have no horse in this race, except that I regret not seeing the RFA earlier so I could have voted Support, sorry about that.
- But if your argument is optics, then having a bunch of recall petitions for the people who most vocally expressed a valid opinion on an evolving policy is absolutely awful optics. At best. Gnomingstuff (talk) 01:33, 22 November 2024 (UTC)
- I took the stats from the first RRfA to test this theory:
| | Support | Oppose | Total |
| --- | --- | --- | --- |
| Administrators | 48 | 29 | 77 |
| Non-admins | 71 | 116 | 187 |
| Total | 119 | 145 | 264 |
- Administrators made up 29% of the voters. If being an admin doesn't influence anyone's vote, then we can expect admins to make up roughly 29% of the supporters and 29% of the opposers. But this didn't happen. In the final results, administrators made up 40% of the supporters and 20% of the opposers. We can also look at the individual odds of supporting/opposing depending on user rights. It ended at 45% support, so you'd expect admins to have a 45% chance of supporting and a 55% chance of opposing. But this also didn't happen. If you choose any admin at random, they had a 62% chance of supporting and a 38% chance of opposing (ignoring neutrals). Non-admins were the opposite: they had a 38% chance of supporting and a 62% chance of opposing.
- So our next question should be why it was so much more likely for an admin to support the RRfA relative to a non-admin. The obvious answer is, of course, as you said: admins have a perverse incentive to support here, especially if they're not-so-great admins who know they probably don't have the trust of the community anymore. Also suggested during the RRfA is the camaraderie that comes from working alongside a fellow admin for so long. I'd be interested in seeing how account age affects likelihood of supporting, but that's not something that can be counted up in a few minutes like admin status. Thebiguglyalien (talk) 17:48, 20 November 2024 (UTC)
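For anyone who wants to check these proportions, here is a minimal Python sketch that recomputes the quoted percentages from the raw tallies in the table above (the variable names are mine; nothing here comes from any external tool):

```python
# Recompute the percentages quoted above from the table's raw tallies.
admins = {"support": 48, "oppose": 29}        # 77 admin voters
non_admins = {"support": 71, "oppose": 116}   # 187 non-admin voters

total = sum(admins.values()) + sum(non_admins.values())  # 264 voters overall
supporters = admins["support"] + non_admins["support"]   # 119
opposers = admins["oppose"] + non_admins["oppose"]       # 145

print(f"admin share of all voters: {sum(admins.values()) / total:.0%}")    # 29%
print(f"admin share of supporters: {admins['support'] / supporters:.0%}")  # 40%
print(f"admin share of opposers:   {admins['oppose'] / opposers:.0%}")     # 20%
print(f"P(support | admin):     {admins['support'] / sum(admins.values()):.0%}")          # 62%
print(f"P(support | non-admin): {non_admins['support'] / sum(non_admins.values()):.0%}")  # 38%
```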
- I believe it may be centered on the idea that we all make mistakes, and many of us like to think we'd be given a chance to grow and learn from said mistake, instead of being forced through the RfA process again. But I recognize I may be being overly optimistic on that, and that others may not have the same thoughts on the matter that I do. Many admins I've spoken to would simply choose to give up their tools as opposed to go through an RfA again, something I've also considered despite my relatively smooth RfA. I'm also not sure Graham is the best representation of that. I voted support, recognizing that Graham87 has made mistakes, but also recognizing the significant contributions they've made and their pledge to do better. Bluntly, I did so expecting the vote to fail, and wanting to show some moral support and appreciation for their work. There's certainly a psychological aspect involved in it, but I don't think that, generally speaking, those of us who voted support or have issues with the current process are doing so out of self preservation.
- There's a lot of numbers that could be analyzed, such as the history of those admins who vote at RfA (whether they often vote support or don't vote at all), but it's hard to draw meaningful conclusions from this small of a dataset. Hey man im josh (talk) 19:14, 20 November 2024 (UTC)
- On paper, I get that. The thing is, I don't know whether you saw Levivich's comment or bradv's comment, but you'd be hard-pressed to find a less appropriate time to test the "chance to grow" theory than the absolutely deplorable behavior that we saw from Graham for many years with far too many chances to improve. If it were down to me, this should have been a block in 2023 rather than a desysop in 2024. Thebiguglyalien (talk) 19:32, 20 November 2024 (UTC)
- I'm late to the discussion, but I think it's also worth pointing out that only 7 of the 25 users who signed Graham87's petition and 2 of the 25 on Fastily's were admins. ~~ Jessintime (talk) 13:16, 23 November 2024 (UTC)
- I would add that there is a potential wrinkle in this analysis. I'm an extended-confirmed user here (and thus would likely be counted as a non-admin), but I am a sysop on Commons so I would have my own perspective on the matter. Abzeronow (talk) 21:06, 22 November 2024 (UTC)
- Well, I'm not an admin and I started this thread. I'm all for having an admin recall process by the community in place. I'm also for a process for course correction by the community in areas where an admin has drifted off course but where the problem is fixable. Administrative Action Review has the potential to become this but that has been stymied by various things. Sincerely, North8000 (talk) 14:24, 20 November 2024 (UTC)
- I think, fundamentally, the problem is that admins have a direct and concrete conflict of interest in this discussion. Of course an admin would be naturally opposed to more mechanisms that might make them lose their permissions, especially since desysops are very rare at the moment.
- I also don't really agree that the current recall process is all that toxic. You could get rid of the discussion section, as the recall is only a petition, not a consensus discussion, but that's about it. Magisch talk to me 18:33, 20 November 2024 (UTC)
- "Of course an admin would be naturally opposed to more mechanisms that might make them lose their permissions" – I wholeheartedly disagree with this assertion. There are a number of us that fully support a recall process, including quite a few people who have historically been open to recalls. This is an oversimplification of the motives of a large group of experienced editors, many of whom have legitimate and reasonable concerns about the process in its current form. Hey man im josh (talk) 19:15, 20 November 2024 (UTC)
- Substantially all criticism I've seen so far of the process has boiled down to "RFA is abusive and it's unreasonable to make people go through that again". And yet, instead of attempting to change that, the only suggestions seem to be to support older admins' rights to have their permissions continue being grandfathered in. Magisch talk to me 19:27, 20 November 2024 (UTC)
- I'm sorry that that's all you've taken away from the vast amounts of criticism given by people. Perhaps consider focusing on whether the process, in its current state, makes sense instead of focusing on older admins. I'm a relatively new admin and I don't support the current iteration of the process. Hey man im josh (talk) 19:30, 20 November 2024 (UTC)
- I think it's eminently sensible to have adminship not be a lifetime appointment, both because norms change even when people don't, and because I see people in every RFA expressing reluctance over granting lifetime tools. I also think that, assuming RFA isn't a big deal, regular reconfirmations make sense. If RFA is a big deal, then the focus should be on fixing that.
- It seems to me that existing admins being immune to having to suffer RFA again has created a lack of pressure to actually make it into a functional, nontoxic process.
- Take my opinion for what it's worth though. I'm not an admin nor do I foresee myself ever having aspirations to become one. Magisch talk to me 19:43, 20 November 2024 (UTC)
- Attempting to improve RFA is a very hard problem that people have been working on since before you joined Wikipedia, and are still working on. I would also say that "it is unreasonable to make people go through that again" is a mischaracterisation of the views expressed, which are "it is unreasonable to make people go through that again unnecessarily", which is significantly different. Thryduulf (talk) 19:31, 20 November 2024 (UTC)
- I just found out about this discussion, and it looks to me like the same or similar things are being discussed in way too many different places. Anyway, I'm someone who has stated repeatedly and strongly in multiple places that I think the recall process is a disaster, and is beyond repair. And, contra some statements above, here are some other facts about me. I'm not an admin. I opposed Graham's re-RfA. And I played a central role in WP:CDARFC. --Tryptofish (talk) 20:12, 20 November 2024 (UTC)
- I would be against it for a different reason: if we allow both supports and opposes, then the recall petition becomes a mini-RfA with the same amount of pressure as the RRfA itself (especially since, given the identical threshold, the recall's result would be indicative of the RRfA's subsequent result). Since anyone can start the recall petition, it functionally means that anyone can force an admin to re-RfA, which is clearly worse.
On the other hand, having a set number of supports needed provides for a "thresholding" of who can open a RRfA, while not necessarily being as stressful. If anything, I would say the recall should become more petition-like (and thus less stressful for the recalled admin), rather than more RfA-like. Chaotic Enby (talk · contribs) 20:01, 20 November 2024 (UTC)
- The ones most likely to be booted are bad admins who are abusive toward the editor community and who negatively represent themselves as admins. Both of the recalls thus far were exactly examples of that: the process worked as designed and removed bad admins who deserved to be desysopped. Though I do think the discussion section of the petitions should be more regulated. Discussion should be about the admin's actions and conduct and nothing else. Any extraneous commentary should be removed. SilverserenC 00:23, 21 November 2024 (UTC)
- When I first started editing Wikipedia almost 20 years ago, I was struck by what, to me at least, appeared to be widespread incivility. Among a number of things which have changed for the better IMHO is an all-round expectation that everyone's standards of behaviour should rise (and they have). The admin role breeds a certain "culture" (for lack of a better term) akin to a conservationist: the role is to "protect" Wikipedia from "harm", and I can certainly see why being an admin could be a deeply frustrating experience. However, what has happened, I think, in the attrition of the admin corps and the turnover in the non-admin corps, is that the generalised culture of "regular" non-admin editors has moved further towards less acceptance of a culture prevalent 10-15 years ago. I think also the rise in editors from non-English speaking backgrounds and from the Global South has caused complexities for those with limited experience outside the anglosphere. The statistics above on the vote for G87's RRFA show an interesting split between admins and non-admins, and within admins. Non-admins were almost overwhelmingly (close to 2/3) of the view that G87 had been given an almost exceptionally long period to improve, had not, and no longer held their trust. 5/8 of admins appeared (and comments here also seem to confirm this) split between solidarity for one of their own and displeasure with the recall process. 3/8 of admins were in alignment with the majority of non-admins. FWIW, I'm not trying to point to some grand schism; a 38/62 admin split on these numbers is not that profound - if just 9 admins had changed their vote from support to oppose it would have been a 50/50 split. To reiterate, I'm not suggesting that there is a great gap between admins and non-admins, but there does appear to be some gap when it comes to generalised views around the expected behaviour of admins. Regards, Goldsztajn (talk) 01:01, 21 November 2024 (UTC)
- Maybe the divide is not between admins and non-admins but between newer and longer-serving editors (who are more likely to be admins)? Hawkeye7 (discuss) 01:20, 21 November 2024 (UTC)
- I don't disagree, and in effect I was sort of saying the same thing in terms of the attrition of the admin corps and turnover in non-admin corps. FWIW, I do think there are some generalised feelings about admins among non-admins; for example, that admins are less likely to face sanction than non-admins. How true that actually is I'm not sure, and the counterpoint would be that a group of people already tested in community trust (i.e. RFA) are less likely to breach that trust. However, comments in the G87 RRFA and the strength of the vote suggest there are (wrongly or rightly) widely felt perceptions of disparity. Regards, Goldsztajn (talk) 01:53, 21 November 2024 (UTC)
- I'm currently compiling the data to get some statistics about voters in Graham's re-RFA. I'm a bit less than halfway through so it might be a couple of days before I can present any results. However among the first 113 support voters the maximum account age (on the day the re-RFA started) was 7919 days (21 years), the minimum was 212 days and the average was 4785 days (13 years). I have no data yet for neutral or oppose voters so cannot say how that compares. Thryduulf (talk) 02:03, 21 November 2024 (UTC)
- Do you have a handy list of all voters for the RFA? It should be simple enough to use a WP:QUARRY query to find out all the details about the voters if someone finds an easy enough way to scrape who each user is. Soni (talk) 05:51, 21 November 2024 (UTC)
- @Soni: [1]. Levivich (talk) 07:09, 21 November 2024 (UTC)
- Here's the Quarry query for edit count/registration date for Supports, Neutrals, and Opposes.
- I think about 6 editors were missed by the tool you linked, but it should not change overall patterns much so we can just use this as is. Soni (talk) 07:24, 21 November 2024 (UTC)
- Prepare to not be surprised. Supporters/Opposers:
- Median registration date: 2008/2014 <-- Behold, Wikipedia's generational shift
- Average registration date: 2011/2014
- Median edit count: 40,293/17,363
- Average edit count: 76,125/43,683
- Thanks for doing the quarry. Teamwork makes the dream work! Levivich (talk) 05:17, 22 November 2024 (UTC)
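For readers who want to reproduce medians and averages like the ones above from a Quarry export, here is a hedged Python sketch. The CSV file name and column labels ("vote", "registration", "editcount") are hypothetical placeholders for whatever the query actually returns, and the assumption that registration timestamps come back in MediaWiki's YYYYMMDDHHMMSS format is mine:

```python
# Sketch: per-group median/average registration year and edit count
# from a hypothetical CSV export of the Quarry results linked above.
import csv
import statistics
from datetime import datetime

def summarise(rows):
    # Assumes MediaWiki-style YYYYMMDDHHMMSS registration timestamp strings.
    years = [datetime.strptime(r["registration"], "%Y%m%d%H%M%S").year for r in rows]
    edits = [int(r["editcount"]) for r in rows]
    return {
        "median_reg_year": statistics.median(years),
        "median_edits": statistics.median(edits),
        "mean_edits": round(statistics.mean(edits)),
    }

with open("rrfa_voters.csv", newline="", encoding="utf-8") as f:  # hypothetical export
    voters = list(csv.DictReader(f))

for group in ("support", "neutral", "oppose"):
    rows = [r for r in voters if r["vote"] == group]
    if rows:
        print(group, summarise(rows))
```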
- At a quick glance, it seemed like editors with more edits were more likely to support while editors with fewer edits (with one exception) were more likely to oppose. - Enos733 (talk) 07:54, 21 November 2024 (UTC)
- Given a single admin action may involve multiple edits, it's not so surprising the supporters' list possibly reflects a group with higher edit counts. Personally, I'd be more inclined to draw conclusions from length of registration rather than edit count. Regards, Goldsztajn (talk) 09:11, 21 November 2024 (UTC)
- my very, very rapid count - supports 35/117 (30%) less than 10 years old, opposes 67/141 (48%) less than 10 years old. In absolute numbers, 10+ year accounts were 82 supports, 74 opposes - actually quite even. What was crucial was younger accounts. It does confirm my sense of gaps between "older" and "younger" generations in regard to perceptions of tolerable admin behaviour. Regards, Goldsztajn (talk) 09:50, 21 November 2024 (UTC)
We have had two recalls as of now. The people signing the recalls were by and large not trolls, vandals, people blocked by that admin, ... but regular editors in good standing and without a grudge. One of these recalls was supported by the RRFA afterwards, and the other admin decided not to go for an RRFA. There is zero evidence that the process is flawed or leads to results not wanted by the community at large. While minor issues need working out (things like "should it be closed immediately the moment it reaches 25 votes or not"), the basic principles and method have so far not produced any reason to fundamentally "fix" the issue. That the process highlights a gap between parts of the community (see e.g. the Graham RRFA) doesn't mean that the process needs fixing. The process would only need fundamental fixing if we got successful recalls which were then overwhelmingly reversed at RRFA, showing that the recall was frivolous, malicious, way too easy... Not now though. Fram (talk) 09:24, 22 November 2024 (UTC)
- I agree with Fram. There is not any evidence that the recall process is reaching outcomes that are not supported by the Community (I voted Oppose on the Graham RRFA; I don't know how I would have voted on a Fastily RRFA). Small fixes to the process if supported would not be indicative of the process itself being fundamentally flawed. Abzeronow (talk) 21:15, 22 November 2024 (UTC)
- I agree that it just needs fixes. North8000 (talk) 15:24, 23 November 2024 (UTC)
I believe that desysoppings for cause should only happen when there is objective evidence of misconduct. My main concern about the recall process is that it may be wielded against administrators who are willing to take actions that are controversial, yet necessary. Examples of actions that have got administrators hounded include (1) closing contentious and politically charged AFD discussions; (2) blocking a "WP:UNBLOCKABLE" editor who is being disruptive or making personal attacks; (3) stepping up to protect a politically charged article to stop an edit war. None of these actions are administrator misconduct, but in a heated dispute the side that has an admin rule in their disfavor may quickly resort to punishing said administrator by starting a recall petition, and in a dispute involving many editors, getting to 25 may be easy. Even if that petition fails, it is so unpleasant that it may have a chilling effect on admin involvement even when needed. Sjakkalle (Check!) 21:14, 23 November 2024 (UTC)
- In which case, an RRFA might be overwhelmingly in favor of the administrator and thus vindicate them. I would definitely vote in support of an administrator if any of those three were the impetus behind a recall. I also trust our editors, and so far, the recall process has worked as intended. Abzeronow (talk) 21:50, 23 November 2024 (UTC)
- ArbCom have to face re-election. Does that have a chilling effect on the arbitrators? Hawkeye7 (discuss) 21:48, 23 November 2024 (UTC)
- That's a facile argument. Arbitrators are well aware that they are standing for a fixed term period. Black Kite (talk) 21:50, 23 November 2024 (UTC)
- It's driving me up the wall that people keep saying that the process has worked as intended. Come back and tell me that, after you can link to an RRfA for Fastily that resulted in whatever result you define as working as intended. --Tryptofish (talk) 22:01, 23 November 2024 (UTC)
- Choosing not to do an RRfA was their own choice, particularly if Fastily thought it wouldn't be successful. It was also their choice to make no attempt whatsoever to respond to the reams of evidence, presented in the recall petition, of their negative actions toward the editing community. So, yes, Fastily as well was an example of the process working as intended. SilverserenC 22:08, 23 November 2024 (UTC)
- Or perhaps they just thought "well, I've put XX years into this and a load of random people with rationales ranging from reasonable to utterly non-existent have told me I'm not fit to do it, so f*** you". If that's the case, I don't blame them. Black Kite (talk) 22:13, 23 November 2024 (UTC)
- Maybe, maybe not. Probably not though right? Seems kind of silly. PackMecEng (talk) 22:17, 23 November 2024 (UTC)
- I suspect that might be my reaction, to be honest. Black Kite (talk) 22:24, 23 November 2024 (UTC)
- He was going to lose if he didn't apologize, and he didn't want to apologize. That simple. As others have said, that was his choice to make, and I respect it. Levivich (talk) 22:28, 23 November 2024 (UTC)
- Except that he did apologize, although there were differing views of whether that apology was enough. This oversimplification is what's wrong with the way discussions happen in this process. --Tryptofish (talk) 22:34, 23 November 2024 (UTC)
- He woulda had to apologize more, then, including for the stuff that came out during the petition, and any other stuff that may have come out during the RRfA. He woulda had to answer questions about it, make promises, etc., basically go through what Graham went through, and realize that even that (answering questions, making promises) might not be enough (as it wasn't for Graham). It's not at all irrational for someone to choose not go through that. Being an admin isn't worth all that to some (e.g., to me), especially if you might not get it despite your best efforts. Levivich (talk) 22:44, 23 November 2024 (UTC)
- "Someone decided that it just isn't worth it" does not equal "the process worked". --Tryptofish (talk) 22:47, 23 November 2024 (UTC)
- No, those two things are not the same. If you want to know why I think the process worked, it's because it stopped disruption, did it faster than Arbcom, and I think with less drama (though admittedly the third one is purely subjective and speculative). Levivich (talk) 22:56, 23 November 2024 (UTC)
- Um, thanks for sharing? --Tryptofish (talk) 23:06, 23 November 2024 (UTC)
- On the petition page, I conducted a careful analysis of the evidence. Nobody refuted what I said there. --Tryptofish (talk) 22:15, 23 November 2024 (UTC)
- Linking might help though. It doesn't seem to be on Wikipedia talk:Administrator recall/Graham87, Wikipedia talk:Administrator recall/Fastily, or on Wikipedia talk:Administrator recall, so it's a bit hard to know what "the petition page" is. Do you mean your 00:39, 13 November 2024 (UTC) reply to A smart kitten? The one that ended with "Does this rise to the level of requiring, for me, a desysop? I'm leaning towards no." And others leaned towards "yes"; it's not as if people couldn't draw different conclusions from your post or disagree with things you said without actually replying directly to you. You didn't contradict the evidence, you personally didn't find it severe or convincing enough, that's all. That doesn't show that the process needs fixing though, just because enough people disagreed with your opinion and the result wasn't put to the test. Fram (talk) 09:28, 25 November 2024 (UTC)
- Fram, the context of what I said was clearer before there were all those intervening edits, but yes, you correctly identified the post I meant as the one that ended with the words that you quoted. Here's the diff: [2]. From where I'm sitting, your analysis here of how people reacted to what I posted is, well, not convincing enough. There was a lot of discussion about the evidence that I analyzed, back and forth. When the editor (A smart kitten) who originally posted the evidence came back with the additional information that I requested, the discussion was still very active. I provided a very detailed examination, point-by-point, of each individual claim made in that evidence. Yes, it was based upon my opinions, but I drew specific conclusions, and justified those conclusions. And nobody came back and said that they thought anything in my analysis was incorrect, nor did anyone who signed on the basis of that evidence before my comment come back and reaffirm their signature, rejecting my analysis. If you think somebody actually did, you can provide a diff of it, but I can assure you that you won't find one. And that wasn't because the petition discussion had come to a close, because it continued for several more days after I posted that. After a whole lot of back-and-forth about that particular evidence, nobody said that they found errors in anything that I said. But a couple more editors did sign the petition after that, with brief comments saying, in some cases, that they decided to sign after reading that particular evidence.
- So the question, in the light of your comment to me, becomes whether those later signers did so because they carefully read all of the discussion, including my critique, and decided to sign, implicitly having decided that my critique was unconvincing – or whether they signed after only a superficial read and had never really engaged with my critique. I cannot prove that it was the latter, and you cannot prove that it was the former. But given that their signatures came only with brief comments, and nobody found reason to actually mention that they had rejected my critique, I'm pretty skeptical of the former. And that's a problem. The petition process does not, of course, require that anyone had to say explicitly that they disagreed with me, either, but that's a shortcoming of the discussion process. A desysop via ArbCom makes room for careful examination of the facts. The petition did not. This is a half-assed way of driving someone off Wikipedia. And I'm arguing for a more deliberative process. --Tryptofish (talk) 18:55, 25 November 2024 (UTC)
- I have to say I don’t get the recall process either. I support admin accountability but just having an arbitrary number of “support” votes, no “oppose” votes, and I guess a time limit instead of consensus forming seems… extremely weird and out of step with how virtually everything else is done on Enwiki. Dronebogus (talk) 10:56, 24 November 2024 (UTC)
- The intended point of the recall petition is not to find consensus or to determine whether the admin has lost the trust of the community, has abused the tools or anything like that. The intended point of the petition is only to prove that a re-RFA is not frivolous. The re-RFA is where consensus is formed from support and oppose, analysis of evidence, etc. Think of it in judicial terms: the petition is at the pre-trial stage and simply aims to answer the question "are there 25 people who think there is a case to answer?" If the answer is no, then it ends there. If the answer is yes, then you can plead innocent or guilty. If you plead guilty you take the sentence (desysopping) and move on. If you plead innocent there is a trial and the jury finds you either innocent or guilty by majority verdict. This is an imperfect analogy of course, but it hopefully helps explain the concept.
- It didn't work like that in either of the two that we've had, but that's a fault with the implementation not with the concept. Thryduulf (talk) 12:57, 24 November 2024 (UTC)
- The problem is, the concept itself makes no sense. Nearly everything on Wikipedia is decided one of three ways: consensus democracy that must be approved/vetoed by an admin (most non-trivial issues); WP:BOLD editing, informal discussion, or admin fiat (trivial issues); or arbitration (extreme fringe cases). This resembles none of those. It’s like arbitration, only everyone can be an arb, and instead of voting yay or nay to take the case you collect signatures to see if there’s general support for a case? Dronebogus (talk) 13:11, 24 November 2024 (UTC)
- The request stage of arbitration is the closest analogy, but it is indeed a process not used anywhere else on Wikipedia. That doesn't mean it doesn't make sense. Its sole purpose is intended to be a check against frivolous requests, so that an admin doesn't have to go through re-RFA just because they pissed off a single editor once by making an objectively correct decision. The actual decision is intended to be made by consensus democracy at the re-RFA. Thryduulf (talk) 13:33, 24 November 2024 (UTC)
- I think a limited vote based on a formula like “after 7 days a minimum of 2/3rds of people must support for re-RFA” would be less opaque than trying to start a Wiki-Minyan? Dronebogus (talk) 09:26, 25 November 2024 (UTC)
- That sounds like skipping the petition, and going right to the RRFA, or running two successive RRFA's. I have not been involved in any of this but it is not really hard to understand why there is the two-step process of: 1) calling the question, and 2) deciding the issue. Alanscottwalker (talk) 11:52, 25 November 2024 (UTC)
- Honestly I think it should just go straight to RRFA, and if there's enough opposition fast enough it can just be WP:SNOW closed. We don't, for example, ask for 25 signatures to start an AfD discussion in order to weed out frivolous nominations - it's patently obvious when a nomination is garbage in most cases. RRFA is clearly a last resort, and no established, good-faith user is likely to abuse this kind of process so egregiously that we need a two-step failsafe. Dronebogus (talk) 12:03, 25 November 2024 (UTC)
- In other words any user should be able to start a binding RRFA on any admin at any time? No, no thank you... – Joe (talk) 12:16, 25 November 2024 (UTC)
- Not any time, there should be a policy that steps must already have been taken and failed, ideally multiple times, similar to ArbCom. And not any user, since the starter should probably be autoconfirmed at the absolute minimum, and probably be required to be in good standing, have X edits, have been on WP X years, and have been active during the last year. If it was unambiguously required that an RRFA follow these rules or be rejected (with filing an improper case being a sanctionable offense) I don't think anyone would realistically start a frivolous case. Dronebogus (talk) 12:33, 25 November 2024 (UTC)
- Well, we also don't require a !vote to create an article but we do for an admin. I also don't think it is likely that 'any experienced user' has experience in making an RRFA -- Alanscottwalker (talk) 12:34, 25 November 2024 (UTC)
- An admin is essentially just voted into office; they should be voted out of office in an identical way. There’s no need for some kind of novel additional process on top of that. That’s all I’m saying. Dronebogus (talk) 12:55, 25 November 2024 (UTC)
- I think the basic complaint here is that the 25-vote threshold is too easy to meet, and therefore it is unfair to require an affirmative consensus for the admin to retain the tools. I think the 25-vote threshold is fine for weeding out frivolous nominations, but correspondingly I think we should make it harder to remove adminship, i.e. make 50-60% the discretionary range for removing adminship. This would make it in line with most of our other processes, where a slight supermajority is required to make changes, and no consensus defaults to the status quo. Whereas under the current recall system, 25 votes with no opportunity to object are enough to make removal of adminship the status quo, which seems a bit harsh. -- King of ♥ ♦ ♣ ♠ 19:53, 25 November 2024 (UTC)
- I think the 25-vote threshold, because it’s so easy to meet, is essentially pointless because it will only weed out extreme outlier cases that I don’t believe will ever happen enough to be a serious concern. We should just have a supermajority vote requirement, and if we must have a petition it should be a lot higher than 25. Dronebogus (talk) 16:06, 27 November 2024 (UTC)
- We don't have evidence the 25-vote threshold is easy to meet. Of the two recalls, one only hit 25 due to a bad block during the petition period. CMD (talk) 16:14, 27 November 2024 (UTC)
- One more reason I don’t like this: it’s extremely important, but we’re using it to prototype this weird system not used anywhere else on Enwiki and possibly Wikimedia (if you have examples of off-wiki precedent please share them). Dronebogus (talk) 16:18, 27 November 2024 (UTC)
- Have to try new things at some point. But CMD is right, from all the evidence we do have, it looks about right. Whereas there is zero evidence that a higher number is required or helpful. PackMecEng (talk) 17:09, 27 November 2024 (UTC)
- It's usually called Approval voting when it's used, though that might not be precisely the right name. It's used all over the Wikimedia movement. At least until recently, both grant requests and the (technical) community wishlist used petition-like voting processes that encouraged support and disregarded opposition votes. That is, if there were 25 people supporting something and you showed up to say "* Oppose because WMF Legal will have a heart attack if you do this", then the request might be rejected because of the information you provided, and your comment might change the minds of potential/future supporters, but it would never be counted as a vote of 25 to 1. It's still counted as a list of 25 supporters. WhatamIdoing (talk) 18:53, 27 November 2024 (UTC)
- The original Phase I Proposal was directly written as adapting dewiki's recall policies to enwiki. I believe the Italian Wikipedia also has a threshold-then-RRFA style process. And I think Spanish too? I might be getting some projects confused. But it's directly used in recall on other projects - that's how it was recommended here (and then adapted after). Soni (talk) 18:58, 27 November 2024 (UTC)
- Arbitration election commissioners are chosen by collecting solely supporting statements. Once upon a time, the arbitration election RFCs also consisted of proposals that commenters approved, without any option to oppose. Requests for comments on user conduct also used a format where support for expressed viewpoints was collected, without opposing statements. edited 18:32, 4 December 2024 (UTC) to add another example. isaacl (talk) 19:50, 27 November 2024 (UTC)
- @Dronebogus This system was modeled after Adminwiederwahl on the German Wikipedia, which has been in place since 2009 or so. --Ahecht (TALK PAGE) 07:34, 2 December 2024 (UTC)
- Interesting. Dronebogus (talk) 13:14, 2 December 2024 (UTC)
- That being said, different wikis have radically different governance structures. For example, Spanish Wikipedia is apparently much more democratic compared to Enwiki (in the literal sense, not just in the sense of “egalitarian” or “un-tyrannical”). Dronebogus (talk) 03:26, 4 December 2024 (UTC)
- It's worth noting dewiki primarily uses the process to desysop inactive admins and has a much longer petition period. Sincerely, Dilettante 18:12, 4 December 2024 (UTC)
- Have to try new things at some point. But CMD is right, from all the evidence we do have, it looks about right. Where as there is zero evidence that a higher number is required or helpful. PackMecEng (talk) 17:09, 27 November 2024 (UTC)
- One more reason I don’t like this: it’s extremely important, but we’re using it to prototype this weird system not used anywhere else on Enwiki and possibly Wikimedia (if you have examples of off-wiki precedent please share them). Dronebogus (talk) 16:18, 27 November 2024 (UTC)
- We don't have evidence the 25-vote threshold is easy to meet. Of the two recalls, one only hit 25 due to a bad block during the petition period. CMD (talk) 16:14, 27 November 2024 (UTC)
- I think the 25-vote threshold, because it’s so easy to meet, is essentially pointless because it will only weed out extreme outlier cases that I don’t believe will ever happen enough to be a serious concern. We should just have a supermajority vote requirement, and if we must have a petition it should be a lot higher than 25. Dronebogus (talk) 16:06, 27 November 2024 (UTC)
Comparing with de.Wiki may be apples and oranges. Disclaimer: this is what I have come up with, but a regular de.Wiki user or admin may well be able to improve or correct my findings. First there is the huge difference in scale - the de.Wiki currently runs with only 175 admins. There are nearly 400 former admins (that's quite a high turnover, but recall replaced the earlier term-limit system for admins which required automatic re-election). But also there is the question of culture: en.Wiki is a lingua franca project contributed to by users from many different backgrounds and regions, while the de.Wiki is largely contributed to from a specific language region that shares a common culture which defines their way of doing things, such as the way their RfCs (Meinungsbilder) are structured, voted on, and commented on. Since 2009, when the de.Wiki system was rolled out:
- There have been 247 recall cases
- There was a rush of 67 cases in the first year 2009
- Since 2018 there have been 30 cases, an average of 4.29 per year
Breakdown:
- 49 handed their tools in voluntarily after being RECALLED. (zurückgetreten)
- 59 were stripped of their tools following a RECALL case and failed on a rerun (Nicht wiedergewählt)
- 96 were stripped of their tools after the rerun time limit expired (Nach Fristablauf de-administriert/Did not run after being asked to run for re-election)
These figures do not add up: 49 + 59 + 96 = 204, leaving 43 of the 247 cases unaccounted for. I think this is because there are several different pages with breakdowns of admin activity. The 43 could be users that passed a recall RfA, or they may have handed their tools in voluntarily on recall, but I can't find a way to know for certain. Kudpung กุดผึ้ง (talk) 23:37, 4 December 2024 (UTC)
- Just in case anyone didn’t get the subtext of my first comment on this: I do think it’s apples and oranges, and that’s why we shouldn’t be using it. Different language editions have such vastly different systems and community cultures they might as well be on other planets half the time. You can’t import stuff between them just because it fills the same niche. Dronebogus (talk) 00:29, 5 December 2024 (UTC)
- I agree that the situations are somewhat different, but it at least means it's not unprecedented. Also, I know what you mean, but I'm still amused by the phrase "en.Wiki is a lingua franca project". --Ahecht (TALK PAGE) 20:19, 10 December 2024 (UTC)
I'm for there being an admin recall process. But we need to recognize that RFA, at its realistic best, is an inherently rough process that few want to go through, and if they don't do so they are gone. At its best it's like standing on a pedestal for a week in the middle of a crowd while people ask questions and make public assessments about you, including about anything that anyone feels they might have done wrong. I just think we need a more careful, thoughtful process before we subject them to "RFA or out". North8000 (talk) 19:36, 11 December 2024 (UTC)
Topics on Jehovah's Witnesses - article spamming issues
[edit]Polish Wikipedia is experiencing an uptick in Jehovah's Witnesses article spamming, surreptitious edits pushing JW terminology, etc. One of the current problems is the spamming of separate articles for every "convention", which is an annual (I think) event with a theme and about 100k visitors. We are discussing their notability right now, and I was wondering whether English Wikipedia has already discussed and cleaned this up, which would be helpful. If you remember any discussions about the notability or monitoring of Jehovah's Witnesses-related topics, or any relevant deleted articles, please share them. (I'm not sure if there is any sensible way to search the deleted-articles archive/log. Can I use any wildcards in Special:Log/delete? It doesn't seem to work.) Tupungato (talk) 12:04, 25 November 2024 (UTC)
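For illustration of the log-search question: Special:Log/delete only accepts an exact title and has no wildcard support, but the same deletion log is exposed through the standard MediaWiki Action API as list=logevents, so pattern matching can be done on the client side. The sketch below assumes only the documented logevents parameters; the regex is a hypothetical example, and continuation via lecontinue is omitted for brevity.

```typescript
// Sketch: fetch recent deletion-log entries via the Action API and filter
// titles client-side, since Special:Log/delete has no wildcard support.
async function findDeletedTitles(pattern: RegExp): Promise<string[]> {
  const params = new URLSearchParams({
    action: "query",
    list: "logevents",
    letype: "delete", // deletion log only
    lelimit: "500",   // maximum for ordinary requests
    format: "json",
    origin: "*",      // allow anonymous cross-origin use
  });
  const resp = await fetch(`https://en.wikipedia.org/w/api.php?${params}`);
  const data = await resp.json();
  return data.query.logevents
    .map((entry: { title: string }) => entry.title)
    .filter((title: string) => pattern.test(title));
}

// Hypothetical example: recently deleted titles mentioning the topic.
findDeletedTitles(/Jehovah's Witnesses/i).then((titles) => console.log(titles));
```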
- @Tupungato, we used to have a list of conventions, but it was deleted 16 years ago at Wikipedia:Articles for deletion/List of Jehovah's Witnesses conventions. I'm not sure we would make the same decision today. Information about some conventions is in History of Jehovah's Witnesses. WhatamIdoing (talk) 02:22, 27 November 2024 (UTC)
- @Tupungato: I'm probably one of the best people you could talk to about this. I've been trying to remove the emphasis on primary sources when JWs are talked about throughout enwiki. The Jehovah's Witnesses article used to cite the denomination's magazines 100+ times. I fixed that. Unfortunately I don't speak Polish but I have an extensive book collection on secondary sources about JWs if you ever wanted me to look something up for you. Clovermoss🍀 (talk) 14:09, 4 December 2024 (UTC)
- In regards to notability, we don't really have articles on individual conventions. I think a few are (or should be) mentioned at the History of Jehovah's Witnesses if secondary sources talked about them, but otherwise that sort of thing definitely wouldn't meet our notability guideline for standalone articles. I'm not sure what the standards at the Polish Wikipedia are because I know various projects have different standards. If you're looking for AfDs, the most recent one I can think of is Wikipedia:Articles for deletion/List of Watch Tower Society publications (2nd nomination). I've mostly been focusing on improving the content we have as there's only a handful of people editing the JW topic area and a lot of what was written a decade ago uses almost exclusively primary sources. Clovermoss🍀 (talk) 14:23, 4 December 2024 (UTC)
- Thank you for your reply. I was away for a week, but I'll have a look how the matters are progressing in Polish Wikipedia, and will remember about your offer to consult. Tupungato (talk) 09:06, 10 December 2024 (UTC)
Can we hide sensitive graphic photos?
[edit]The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Can we hide sensitive graphic photos? I recently came across an article with a photo of a deceased man smiling right at the top—it was deeply disturbing, traumatizing, triggering, shocking, and sickening! This kind of content discourages many people who might otherwise want to read the article and could even provoke serious medical reactions, such as seizures. Imagine if that man's family came across the article and saw him like that, right in their face! Nobody seems to favor this policy, so why do we insist on keeping it? Arabic Wikipedia uses a collapsible template that lets readers choose whether to view such photos, without censoring informative media. Shouldn't we adopt a similar approach? ☆SuperNinja2☆ TALK! 21:41, 30 November 2024 (UTC)
- Not sure where you are getting that the image subject was dead at the time the image was taken. Just Step Sideways from this world ..... today 21:49, 30 November 2024 (UTC)
- I couldn't even think. I was totally shocked. Anyhow, my point still stands. ☆SuperNinja2☆ TALK! 21:51, 30 November 2024 (UTC)
- I don't see anything in the photo, Commons description, or CDC description that states the patient is deceased. Is there a chance this person is alive? –Novem Linguae (talk) 02:05, 5 December 2024 (UTC)
- See HELP:NOSEE Lee Vilenski (talk • contribs) 21:50, 30 November 2024 (UTC)
- The issue is that an image one editor might find “disturbing, traumatizing, triggering and shocking” is an image another editor will find informative and helpful. We have no way to know how others will react. It would indeed be censorship to hide such images. Blueboar (talk) 21:50, 30 November 2024 (UTC)
- shouldn't we choose the option that minimizes the harm to readers? That's what most companies/organizations (idk what is the right term, sorry) do. ☆SuperNinja2☆ TALK! 21:54, 30 November 2024 (UTC)
- We already have. The "harm" to a person seeing such useful images in an encyclopedia is insignificant. The true harm is hiding information from those looking for it.--User:Khajidha (talk) (contributions) 21:19, 1 December 2024 (UTC)
- That is debatable. Emir of Wikipedia (talk) 21:38, 1 December 2024 (UTC)
The true harm is hiding information from those looking for it
- this is exactly what shoving these gore images in people's faces does. ☆SuperNinja2☆ TALK! 03:46, 4 December 2024 (UTC)
- How does showing relevant information hide information?--User:Khajidha (talk) (contributions) 11:36, 4 December 2024 (UTC)
- the users will close the page once they see the images, instead of reading the information they came for (it happened to me with the example above), and they will even avoid visiting Wikipedia after this bad experience. ☆SuperNinja2☆ TALK! 18:25, 11 December 2024 (UTC)
- We have no reason to try and coax sensitive users to our site by hiding things they don’t like. ꧁Zanahary꧂ 18:35, 11 December 2024 (UTC)
- @Super ninja2 then those are users that we gladly do not want here. ValarianB (talk) 18:49, 11 December 2024 (UTC)
- Image censoring is a perennial proposal and really won't go anywhere. And given the topic of that page, I see no real option, since any other image will also be as disturbing. We do ask editors to use the principle of least astonishment, so that same image as the lede on corpse for example would be inappropriate, but not much can be done on that page. Masem (t) 21:51, 30 November 2024 (UTC)
- we can use a collapsible template, then that won't be censoring. ☆SuperNinja2☆ TALK! 21:55, 30 November 2024 (UTC)
- That type of suggestion is part of the perennial proposal on how to deal with such images. There's nothing that can be done to properly hide it. Masem (t) 22:05, 30 November 2024 (UTC)
- We already use collapsible templates for "long" lists, such as for BRICS members. While long lists are far less harmful, the goal was to avoid annoying readers and make them comfortable, encouraging them to read. This is also why we have templates like Template:Split—to make articles easier to navigate. Similarly, graphic images make readers extremely uncomfortable, not only discouraging them from reading a single article but sometimes deterring them from using Wikipedia altogether, which goes against the ideals of an encyclopedia.
- The fact that image censoring is a perennial proposal suggests it’s a problematic topic that many, if not most, editors find uncomfortable. I suspect the primary reason it hasn’t been adopted is the lack of consensus, not because half the community opposes it outright. I propose a solution that could satisfy both groups: a collapsible template. This approach wouldn’t censor anything but would minimize harm.
- Let’s focus on images that could provoke serious medical conditions and ignore the sexual and religiously offensive media for the time. Some readers may have heart conditions, PTSD, or other vulnerabilities, and we must also consider the families of deceased individuals whose photos we use. Additionally, while Wikipedia isn’t intended for children, they do use it, and we can’t ignore that reality.
- In summary, the potential harm caused by showing these images overrides any benefit to the project. And this solution would fix this by making Wikipedia safer and more inclusive without censoring anything, which is the essential goal. ☆SuperNinja2☆ TALK! 22:28, 30 November 2024 (UTC)
- You've yet to show harm beyond you having a personal reaction to a picture that you didn't understand... an informative picture key to the article that I didn't even slightly flinch upon seeing. (If you have any records of Wikipedia images having provoked seizures, please put them forward.) Had you hidden it by collapsing, I might have assumed that there was something horrible that I wouldn't want to see and avoid getting that information. -- Nat Gertler (talk) 00:02, 1 December 2024 (UTC)
- I know Trypophobia has been the subject of discussion about a good lede that doesn't immediately elicit a problem for readers that have that fear. Masem (t) 00:22, 1 December 2024 (UTC)
- That article has had requests to remove or hide the image for about a decade now. WhatamIdoing (talk) 00:26, 1 December 2024 (UTC)
Had you hidden it by collapsing, I might have assumed that there was something horrible that I wouldn't want to see and avoid getting that information
- That would be your choice not to 'get that information.' However, forcing it on people who don't want to 'get it,' and risking a negative reaction as a result, is the real issue we should be concerned about.
You've yet to show harm beyond you having a personal reaction to a picture that you didn't understand... an informative picture key to the article that I didn't even slightly flinch upon seeing
- That is your personal experience, but we know that at least one person had an anxiety attack from that image. As a community, it is our duty to prioritize the safety of our readers and choose the least risky option. ☆SuperNinja2☆ TALK! 13:47, 1 December 2024 (UTC)
- And you had the choice not to "get that information" that was in the picture.... you chose to go to the Wikipedia page about a disease. You claim to have been set off because it was
a deceased man smiling
... only the man wasn't deceased; he is described in the image's description as a "patient", which is not generally a term for a corpse. So what set you off was a man smiling. If you want us to police pictures based on information that you invent about them, it's hard to see how we don't have to police everything on your behalf. When it comes to the safety of our viewers and medical-related images, an image can help them recognize the disease and may serve them well. The "least risky" option is simply not having Wikipedia. I hope we don't choose that path. If you think that Wikipedia presents a special danger to you, you are free not to use it. -- Nat Gertler (talk) 17:53, 1 December 2024 (UTC)
- I don't understand what you're defending. You're just complaining and criticizing my argument without demonstrating why leaving sensitive media as-is is a better option. Your argument essentially boils down to: "I don't like your proposal," which isn't sufficient.
- Anyway, regardless of whether that man was dead or not, my point still stands.
The "least risky" option is simply not having Wikipedia.
- I don’t think that’s the goal of Wikipedia—to discourage its readers from using it. If the choice is “either read Wikipedia and risk having anxiety attacks or don’t read it at all,” then it’s clear the situation is bad and requires change. ☆SuperNinja2☆ TALK! 21:08, 1 December 2024 (UTC)
- So far, I know of one person claiming to have had a problem, and that's because he saw a picture of a man smiling. Hiding all pictures as not-obviously-problematic as that would basically mean hiding all pictures... and it's not just pictures that upset people, plenty of the text would have to be hidden under the same logic. (People might be freaked out by seeing that a ninja edits Wikipedia.) Folks have pointed you to the option that would let you turn off automatic image display for yourself, and if you wanted to make some argument that that should be a standard option, that may well be a supportable argument... but hiding everything that could possibly upset anyone would basically be hiding everything. -- Nat Gertler (talk) 21:30, 1 December 2024 (UTC)
Let’s focus on images that could provoke serious medical conditions and ignore the sexual and religiously offensive media for the time. ... And this solution would fix this by making Wikipedia a safer and more inclusive without censoring anything, which is the essential goal.
I think part of the reason why no consensus was ever reached on this issue is that the editors in favour of image filtering do not acknowledge that it inherently involves an infringement on intellectual freedom, and so don't put forward a framework for how to minimize the infringement. The approach can't be "Let's just create the functionality now and then worry later about what to do when a vocal minority of editors want to be able to hide all depictions of people with disabilities, or of LGBTQ+ people, because they find those images distressing." Those considerations need to be the starting point. I don't support image filtering, but when the discussion was held back in 2011 I did put forward a framework of seven principles for approaching it from this angle.--Trystan (talk) 17:05, 1 December 2024 (UTC)
infringement on intellectual freedom
- Why do you guys want to go so technical and get things so complicated when the situation isn't at all complicated? Ppl dislike seeing gore, let them choose not to? Just like that, easy peasy. ☆SuperNinja2☆ TALK! 21:15, 1 December 2024 (UTC)
- Who defines what is "gore"? There's probably only a few types of images that we universally can say are problematic to a near majority of the world population (eg when you start to get into child exploitation), but beyond that, there's no way to tell when such an image would be considered bad by a majority of the readership. Masem (t) 21:18, 1 December 2024 (UTC)
- So you're basically presuming that this discussion is destined for failure because ppl have different povs on the topic? That's not a good enough argument. When did the community ever have similar povs on anything for that matter? ☆SuperNinja2☆ TALK! 02:10, 5 December 2024 (UTC)
- Don't want to see gore? Don't go to pages about gory things. Easy peasy.--User:Khajidha (talk) (contributions) 15:25, 2 December 2024 (UTC)
- That most certainly is censorship.--User:Khajidha (talk) (contributions) 21:20, 1 December 2024 (UTC)
any other image will also be as disturbing
that is what I'm arguing about. Disturbing images should be collapsed at best. ☆SuperNinja2☆ TALK! 21:59, 30 November 2024 (UTC)
- @Super ninja2, quite a lot of people agree with you, but a long time ago, this was formally proposed, and The Community™ rejected it. I have a lot of unhappy memories from that discussion, so you should not necessarily consider me to be an unbiased source. (Redacted)
- The proposed approach was that a person should be able to say, in advance, that they personally don't want to see sexual images, disgusting medical images, violent images, or contested religious/cultural images, and have images tagged like that collapsed or screened somehow, with one click to reveal. The responses tended to cluster in two categories:
- Individuals should not have the freedom to control what they see, even if they are doing it for neutral reasons, like wanting to conserve bandwidth on a weak internet connection, or for safety reasons, like not wanting to risk an anxiety attack right now or not wanting to worry about the morality police looking over your shoulder at a public internet cafe. The Wikipedia editor has the right to put things on your computer screen, and your duty as a reader is to look at whatever disgusting, violent, or inappropriate image they want to shove in your face.
- It would be impossible to figure out which (few) images draw complaints. It might be impossible to do this with 100% accuracy, but we all know that the lead image at Smallpox draws complaints even though there's a FAQ at the top of the talk page to explain why it's there, every educated person knows that Depictions of Muhammad are both easily identifiable and considered inappropriate by some religious adherents, and most of us have encountered an animated gif that we'd like to cover up or turn off.
- I'm opposed to the first in principle and skeptical of the second. But that's the state of the discussion, and at this point, it will likely continue this way until multiple countries pass laws demanding that we change it. The Community™ has no empathy for people whose living situation is very different from their own. WhatamIdoing (talk) 00:10, 1 December 2024 (UTC)
- This context might help: Wikipedia was basically a spinoff from a now-defunct male-focused porn site. For years, every porn actress who was featured even once as a Playboy Playmate was automatically considered notable. If you infer from that fact something about the attitudes towards controversial content in the early days, I couldn't prove you wrong. WhatamIdoing (talk) 00:22, 1 December 2024 (UTC)
- Looking at the results on that page, it seems to say more people supported it than opposed it? Alpha3031 (t • c) 01:32, 1 December 2024 (UTC)
- There is one technically feasible solution I can come up with, although it may be complicated:
- Create a list of types of images that some will find offensive (anatomical parts typically not displayed in public, religiously offensive images, etc). Create a template to mark each type.
- Have the software mark these images, when used on other pages, in some way that scripts can use. Write scripts which individual users can self-apply to hide these images (see the sketch after this list). Create a page with instructions for using these scripts, with a disclaimer that 100% results aren't guaranteed.
- These measures should be invisible to users not interested in them, except the tag on the image page. Animal lover |666| 10:59, 1 December 2024 (UTC)
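A minimal sketch of what the script half of this scheme might look like, assuming the tagging is done with file-page categories readable through the Action API. The category name is a hypothetical placeholder, and no such gadget currently exists:

```typescript
// Sketch of the tag-and-hide scheme: look up each image's file-page
// categories through the Action API and hide images whose categories
// match a personal blocklist. The category name below is hypothetical.
const HIDDEN_CATEGORIES = ["Category:Example sensitive images"];

async function fileCategories(fileTitle: string): Promise<string[]> {
  const params = new URLSearchParams({
    action: "query",
    titles: fileTitle,
    prop: "categories",
    cllimit: "max",
    format: "json",
    origin: "*",
  });
  const resp = await fetch(`https://en.wikipedia.org/w/api.php?${params}`);
  const pages = (await resp.json()).query.pages;
  const page = pages[Object.keys(pages)[0]];
  return (page.categories ?? []).map((c: { title: string }) => c.title);
}

async function hideTaggedImages(): Promise<void> {
  // Article thumbnails link to their file description page (/wiki/File:...).
  const links = document.querySelectorAll<HTMLAnchorElement>('a[href*="/wiki/File:"]');
  for (const link of links) {
    const title = decodeURIComponent(link.href.split("/wiki/")[1]);
    const cats = await fileCategories(title);
    if (cats.some((c) => HIDDEN_CATEGORIES.includes(c))) {
      link.style.visibility = "hidden"; // a real script would collapse instead
    }
  }
}

hideTaggedImages();
```

Because the lookup here is one request per image, a production version would batch titles into a single query; the point is only that tag data of this kind would already be reachable client-side.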
- In some places a woman's hair is not typically displayed in public. Imagine if we had to hide every photo of a woman because her hair was visible, and we marked it with a template warning "Image of woman with visible hair". Valereee (talk) 18:59, 1 December 2024 (UTC)
not wanting to worry about the morality police looking over your shoulder at a public internet cafe.
- If you live in Saudi Arabia, Iran, or even less religious countries like Jordan, Morocco, or Egypt, and you were reading an article in a public place when a sexual photo deemed inappropriate popped up on your screen, you could literally be jailed! ☆SuperNinja2☆ TALK! 13:05, 1 December 2024 (UTC)
- And imagine if that photo was a depiction of Muhammad, then jail would be mercy. ☆SuperNinja2☆ TALK! 13:09, 1 December 2024 (UTC)
- Those might be valid points if these pictures were just inserted willy-nilly into any old page. But, for example, there is no reason NOT to expect an image of Muhammad on the Muhammad page (at least if you know that the site is not made entirely by Muslims). Articles about something having pictures of that something is not something you should be surprised by. Don't want people seeing what you are looking at? Don't do it in public. This is not hard.--User:Khajidha (talk) (contributions) 12:30, 2 December 2024 (UTC)
- Actually, these pictures (and pictures that haven't been tagged for censoring yet) can be inserted willy-nilly into any old page by vandals. We do try to catch and revert such edits, but there is no guarantee that articles will not contain completely inappropriate images (or text, or ASCII art). If something important like your freedom or livelihood depends on not looking at inappropriate content on Wikipedia in public, you should not look at any content on Wikipedia in public. —Kusma (talk) 20:18, 2 December 2024 (UTC)
- what a terribly sexist and racist comment, full of prejudiced assumptions about who might disagree with you. Fram (talk) 14:19, 1 December 2024 (UTC)
- Individuals already have control of what they see. They chose to come here. How can anyone seriously expect not to see images of such things in articles about these things? That's simply ridiculous.--User:Khajidha (talk) (contributions) 21:24, 1 December 2024 (UTC)
- See our Wikipedia:Content disclaimer. This isn't likely to be changed because you found an image that you objected to. There are ways for you to avoid seeing images you don't want to see; see WP:NOSEE, specifically the section about the userscript that blocks all images unless you click to see them. Lee Vilenski (talk • contribs) 13:25, 1 December 2024 (UTC)
- no need to change the Content disclaimer because we will still display the offensive images but this time, the reader will choose to view them. ☆SuperNinja2☆ TALK! 14:04, 1 December 2024 (UTC)
- No, I'm not suggesting we change it. I'm suggesting that you read it and realise we aren't going to hide suitable images. Lee Vilenski (talk • contribs) 15:49, 1 December 2024 (UTC)
- Let's not forget that WP:NOTCENSORED is a policy. - Ratnahastin (talk) 05:56, 2 December 2024 (UTC)
- The good of hiding disturbing or upsetting information, including images (which is real, and appropriate in many contexts) is completely incompatible with the good of presenting information in an educational and encyclopedic context, which is what we are doing on Wikipedia. Strongly oppose even a collapsible option or anything like it. ꧁Zanahary꧂ 19:32, 2 December 2024 (UTC)
- Blurring or collapsing that can be toggled off with a single click does not constitute censorship. Censorship would be only if images were removed or the users were somehow restricted from seeing them, e.g. by first forcing them to disclose their age or location. Giving everyone, including unregistered users, a reasonable default option to avoid inadvertently seeing explicit images is just a convenience feature in the user interface. This just follows from the principle of least astonishment, as most people expect to be warned before seeing sensitive content, and are used to that on other websites.
- Making Wikipedia more convenient for a large number of users is not equivalent to being forced to adhere to culturally contingent moral prohibitions. There is quite a distance between these two positions. NicolausPrime (talk) 02:38, 3 December 2024 (UTC)
- The reasonable default on an encyclopedia is that information is conveyed, not curtained. I’d counter your least astonishment argument with the fact that nobody is used to being warned about sensitive content in an encyclopedia. ꧁Zanahary꧂ 05:42, 3 December 2024 (UTC)
Very strong oppose on this one. Putting together a censor board to decide what is, could be, and/or is not offensive to whoever across the globe is a terrible idea, a waste of time, and does not help the site. WP:CENSOR is a crucial ingredient in Wikipedia's ability to cover everything under the sun. :bloodofox: (talk) 21:01, 1 December 2024 (UTC)
Oppose. Hurt feelings and thin skin are not a Wikipedia problem. Zaathras (talk) 04:27, 2 December 2024 (UTC)
- I recall encountering discussions about three photos on Wikipedia: the profile photo of the pregnant Lina Medina, napalm girl, and Robert Peary's sunbathing Inuit girlfriend Aleqasina. I believe that the napalm girl is the only one currently visible on Wikipedia. So WP:NOTCENSORED may be the stated policy, but it doesn't sound like we're following it. Fabrickator (talk) 08:43, 2 December 2024 (UTC)
- There are other reasons a photo might be deleted. It could be under copyright, for instance. Valereee (talk) 13:33, 2 December 2024 (UTC)
- (replacing my erroneously entered response)
- The initial objection to the Aleqasina image was that it was "overtly exploitative pornography". This was objected to as a basis for removing the image. In response, someone removed the image on the basis that it was "a poor quality image compared to the other photos in the article." Fabrickator (talk) 16:40, 2 December 2024 (UTC)
- Is the photo at Commons, though? If not, it's possible the photo was removed from an article for that reason, but hasn't been put back under NOTCENSORED because it's not in the public domain. All of these photos could be less than 95 years old. Valereee (talk) 16:44, 2 December 2024 (UTC)
- FWIW, the photo in question is from 1896. Here is the applicable "fair use" notice:
This media file is in the public domain in the United States. This applies to U.S. works where the copyright has expired, often because its first publication occurred prior to January 1, 1929, and if not then due to lack of notice or renewal.
Photo is available at commons:File:Mother of the seals.jpg. Fabrickator (talk) 18:07, 2 December 2024 (UTC)
- It's used on ruwiki. The discussion started out as a complaint from inexperienced editors that the photo was offensive, but that doesn't really seem to be what editors there removed it for. They didn't remove it because she's naked. It definitely is a low quality photo, even for the period. It definitely is a fair point that it doesn't add to the reader's understanding of Peary. I'm not sure this is censorship. To me it looks like someone complained it was offensive, other editors said "Why is this image in this article?", and there was discussion of whether removal constituted censorship. I think it could probably be included in Photos by Robert Peary or something. Valereee (talk) 19:09, 2 December 2024 (UTC)
- If an image is not of real educational or encyclopedic value, then it being gratuitous pornography is a fine reason to exclude it. That is not censorship. ꧁Zanahary꧂ 19:35, 2 December 2024 (UTC)
- Nothing against pictures of gore. But could we avoid seeing any images of this guy, who many people find very offensive? Martinevans123 (talk) 15:34, 2 December 2024 (UTC)
- I certainly understand that the person's opinions and actions are offensive, but is a mere picture of him that bad? Animal lover |666| 16:24, 2 December 2024 (UTC)
- The words "deeply disturbing, traumatizing, triggering, shocking, and sickening" spring to mind. But never mind. Martinevans123 (talk) 16:26, 2 December 2024 (UTC)
- Is a mere picture of a woman's (you name the body part, someone somewhere finds it offensive) that bad? Valereee (talk) 16:46, 2 December 2024 (UTC)
- I would not be opposed to an opt-in only tool or preferences setting or whatever that allows users to avoid seeing certain types of imagery. It would have to be entirely voluntary. I would imagine something that works by looking at an image's categories could do it. Just Step Sideways from this world ..... today 20:44, 2 December 2024 (UTC)
- Is WP:NOSEE not enough? Valereee (talk) 20:50, 2 December 2024 (UTC)
- NOSEE, for all its value, requires the user (who may well be just a Wikipedia reader, not an editor) to install a script, a process that I suspect daunts some of those who are not tech-comfortable, if they even know that system exists. A "require-clicking-to-view-any-image" user option that can be turned on with just a switch would serve not just those who may be concerned about being offended or disturbed by an image, but also those for whom bandwidth may be limited or expensive, and it would be in the place where a user is likely to look for such a control.... but a "don't show offensive images" option would require a huge overhead of effort on the part of the editing base, to mark the existing images, to mark every new image, and to deal with the inevitable disagreements about which images should be marked. -- Nat Gertler (talk) 23:28, 2 December 2024 (UTC)
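For comparison, the "require-clicking-to-view-any-image" option described above is simple precisely because it needs no judgement about content. A hypothetical sketch of such a user script, not an existing gadget:

```typescript
// Sketch of a "click to view images" script: replace every article image
// with a placeholder button and restore the image only on click.
// #mw-content-text is the article body container on Wikipedia.
function hideImagesUntilClicked(): void {
  const images = document.querySelectorAll<HTMLImageElement>("#mw-content-text img");
  images.forEach((img) => {
    const button = document.createElement("button");
    button.textContent = "Show image";
    // Keep roughly the same footprint so the page layout doesn't jump.
    button.style.width = `${img.width || 220}px`;
    button.style.height = `${img.height || 140}px`;
    button.addEventListener("click", () => button.replaceWith(img), { once: true });
    img.replaceWith(button); // the image element is kept in memory
  });
}

// Note: this hides images only after the browser has fetched them; actually
// saving bandwidth would require clearing each src before the image loads.
hideImagesUntilClicked();
```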
- Our license allows anyone to reuse our content and to filter images in any way they like. I expect that if there truly is a need for a Wikipedia version with certain censorship applied, someone will write a (possibly AI-powered) tool to deliver it. But I don't see hiding relevant information as something that could ever be part of Wikipedia's (or even Wikimedia's) mission. —Kusma (talk) 21:34, 2 December 2024 (UTC)
- Something like 17 years ago there was a child-friendly clone of WP that I made available on the computers at the elementary school where I worked. I don't know if there is anything like that around now. Donald Albury 21:55, 2 December 2024 (UTC)
- @Donald Albury: That was Wikipedia for Schools. I've never fleshed out a real proposal, but the idea has been in my head for years to revive that idea, not as CD-ROMs but as a static fork of WP: a curated collection of WP articles, nothing sexually explicit but also not hosting articles on every single episode of Family Guy, and also no editing. A list would be created and maintained, a bot or something would import the articles and update them if they get major revisions, but no open editing. Schools can block the main Wikipedia altogether. They get a nice, clean, kid-friendly WP and we get way less vandalism. I just don't know how to actually do any of that. Just Step Sideways from this world ..... today 22:42, 10 December 2024 (UTC)
- Imagine the process involved in marking content as offensive or falling within certain categories. What is sacrilegious? What is pornographic? What is violent? What is disgusting? And why is it Wikipedia’s problem? ꧁Zanahary꧂ 23:00, 2 December 2024 (UTC)
- All of this was said more than a decade ago. I see nothing in this discussion that wasn't put forward by the opponents back then, from "NOTCENSORED gives me the right to force you see to see things you'd like to opt out of" to "whatabout this" to "we should prevent people from volunteering to do the necessary work". Apparently we haven't changed a bit. I am not really surprised. WhatamIdoing (talk) 23:26, 2 December 2024 (UTC)
"NOTCENSORED gives me the right to force you see to see things you'd like to opt out of"
-- I'm sorry, I can't find that quote in this discussion. If someone is actually putting forward that we should force people to look at Wikipedia, that's an editor we should be concerned about. -- Nat Gertler (talk) 23:33, 2 December 2024 (UTC)
- So over a decade ago, this idea was rejected, and today people still reject it on the same basis. I'm not seeing the problem. ꧁Zanahary꧂ 01:08, 3 December 2024 (UTC)
- Nobody is forcing you to look at anything. You are the one who chose to visit this site. --User:Khajidha (talk) (contributions) 13:44, 3 December 2024 (UTC)
What is sacrilegious? What is pornographic? What is violent? What is disgusting?
Anything that would be considered WP:GRATUITOUS outside of encyclopedic use on Wikipedia. As evidenced by that content guideline, Wikipedia has already been using a notion of what content may be explicit for over a decade. Wikipedia has also been able to use its consensus processes to decide many other contentious and often outright controversial matters, such as WP:NPOV and WP:TITLE.
And why is it Wikipedia's problem?
It is Wikipedia's problem because a considerable portion of its readers expects this, as evidenced by this matter being discussed perennially. NicolausPrime (talk) 06:52, 3 December 2024 (UTC)
- Unencyclopedic content shouldn't be on Wikipedia to begin with. Offensive encyclopedic content should. Good luck with identifying the encyclopedic content that will and won't offend anybody. ꧁Zanahary꧂ 08:48, 3 December 2024 (UTC)
It is Wikipedia's problem because a considerable portion of its readers expects this, as evidenced by this matter being discussed perennially.
Faced with the perennial problem of some users demanding warning labels on content they view as offensive, the collective response of the library profession over several decades has been to strongly oppose such systems due to the inherent infringement on intellectual freedom. From the American Library Association:
Labeling as an attempt to prejudice attitudes is a censor's tool.
There is an inherent non-neutrality in identifying groups of images that users may want to avoid. The image that started this discussion is a good example of that. It was mistakenly thought to be a dead body, but is in fact a person suffering from a disease. Identifying the appropriate categories to be warned against, and which images merit those warnings, is an exercise incompatible with free and open access to information.--Trystan (talk) 15:28, 3 December 2024 (UTC)
- Sure. But contrast that with library selection policies (hmm, missing article – @The Interior, could I tempt you to write an article?) and collection development work. Libraries oppose putting labels like "this is an immoral book" on collection items. They've got no problem with putting an objective label like "pornography" on a collection item, nor any problem with deciding that they won't stock porn at all. WhatamIdoing (talk) 01:14, 4 December 2024 (UTC)
- With the vast arguments over whether, say, Gender Queer is pornography, it's hard to see it as objective. It's pretty much the Potter Stewart standard. -- Nat Gertler (talk) 01:58, 4 December 2024 (UTC)
- If a "pornography" label is a viewpoint-neutral directional aid intended to help interested users locate the resource, that would be valid. But not if it is intended to warn users away from the content:
7. Is it prejudicial to describe violent and sexual content? For example, would including "contains mild violence" on bibliographic record of a graphic novel violate the Library Bill of Rights? Yes, in any community, there will be a range of attitudes as to what is deemed offensive and contrary to moral values. Potential issues could be sexually explicit content, violence, and/or language. Including notes in the bibliographic record regarding what may be objectionable content assumes all members of the community hold the same values. No one person should take responsibility for judging what is offensive. Such voluntary labeling in bibliographic records and catalogs violates the Library Bill of Rights.
[3]--Trystan (talk) 02:04, 4 December 2024 (UTC)
What is sacrilegious? What is pornographic? What is violent? What is disgusting? And why is it Wikipedia’s problem?
- Consensus would answer these questions.
- This is the main purpose of this discussion.
- ☆SuperNinja2☆ TALK! 04:01, 4 December 2024 (UTC)
- Just for logistical considerations: how many images are we talking about, and therefore how many consensus discussions, and how often could someone reopen one to see if consensus had changed? I feel like there are a huge number of images that might upset someone, but very few that could get consensus for being hidden. Risus sardonicus averages 250+ views a day. The chance that image could ever gain consensus to be hidden is... well, in my mind, unlikely. But if even 1 in 100,000 people are freaked out enough and knowledgeable enough to start a discussion, we could be confirming that once a year via discussion at the talk page. Valereee (talk) 13:26, 4 December 2024 (UTC)
I would imagine something that works by looking at an image's categories could do it.
Subject categories serve a different function than warning labels, and the two functions are not compatible. A subject category about nudity should tag those images where nudity is central to the subject of the image (where it is defining), while a warning label would tag every single image containing any nudity, however trivial. Implementing image filtering that uses subject categories would distort the former into the latter. It would need to be a separate system. I agree with NatGertler above; it would be fine to introduce user-friendly functionality that hides all photos and lets users click to view based on the alt text. But flagging all images that someone, somewhere would object to is not a viable project.--Trystan (talk) 00:33, 3 December 2024 (UTC)
- I'm reminded of the deleted Zionist symbol template on Commons, which was slapped all over images of Jewish stars in any context, including a chanukiah and some blue sugar cookies—which, no doubt, would be offensive images to some. ꧁Zanahary꧂ 00:57, 3 December 2024 (UTC)
- And the similar commons:Template:Chinese sensitive content. Simply: it becomes obvious that Wikipedia should not be working around people’s sensitivities as soon as you consider a common sensitivity that you consider silly or repressive. ꧁Zanahary꧂ 01:06, 3 December 2024 (UTC)
- This kind of "whataboutism" was addressed in the original report and recommendations. WhatamIdoing (talk) 01:53, 3 December 2024 (UTC)
- I recommend you try and imagine a position besides yours that isn’t fallacious or the result of an intellectual failure. Your approach is not a good one from the losing side of a debate. ꧁Zanahary꧂ 05:39, 3 December 2024 (UTC)
- I severely disagree with classifying this as whataboutism. It's real, it will happen, we see it happening. —TheDJ (talk • contribs) 13:21, 9 December 2024 (UTC)
- Yes it is. No one mentioned that we would take a similar approach to the Chinese and Zionist templates. That's because we aren't going to hide Zionist symbols or any other politically sensitive media.
- And if the problems encountered by these templates are worrying you, then please explain them so we can address and avoid them. ☆SuperNinja2☆ TALK! 06:20, 11 December 2024 (UTC)
- Raising an illustrative parallel is not "whataboutism"—it's not even on the same spectrum as whataboutism. ꧁Zanahary꧂ 06:31, 11 December 2024 (UTC)
- I'd be happy to have a default turn on/turn off all images mode in preferences. But anything that requires judgement or consensus for which images or category of images? I'd object. Valereee (talk) 15:36, 3 December 2024 (UTC)
- Agreed. Same with Sanctioned Suicide online forum. They removed its URL. ☆SuperNinja2☆ TALK! 02:19, 5 December 2024 (UTC)
- The simple answer: no. Long answer: The addition of a function to turn off images by default is a great idea that’s seemingly never been implemented despite its harmlessness and relative popularity, and is best taken up at some more technical-oriented forum. But we are never hiding/censoring graphic images if they serve a legitimate purpose. True, I don’t support graphic full color images of goatse on the Goatse.cx article per the Wikipedia:Principle of least astonishment and Wikipedia:GRATUITOUS, but the grey area here is very big and very grey. I’m not talking about the strawman arguments about “what if Dictator McTyrant in Dictatorstan bans pictures of goats” or something; here are some examples of things that could legitimately be considered objectionable to certain persons in a liberal Western society:
- Images or voices of deceased indigenous Australians
- Spiders
- Flashing/strobing lights
- Blackface imagery
But are we not allowed to illustrate Indigenous Australians, Spiders, Dennō Senshi Porygon, or Blackface then? Do we need warnings for these things? Do we need warnings for articles that simply discuss distressing content? These are actual, plausible issues people actually have had to address on other, equally serious platforms. But it’s literally impossible to address every conceivable issue, so Wikipedia’s longstanding policy is to simply address none of them (besides the bare minimum examples provided above). Dronebogus (talk) 03:52, 4 December 2024 (UTC)
But are we not allowed to illustrate Indigenous Australians, Spiders, Dennō Senshi Porygon, or Blackface then
- It’s up to the community to decide, and we’re all here to discuss this. What’s clear, however, is that we need to establish minimum criteria to guide us on what should be collapsed. We must draw a line to distinguish what can and cannot be collapsed.
- This isn’t a case where passing the proposal will lead to chaos and censorship, with everyone hiding images indiscriminately. We’ll be here to make the necessary adjustments and ensure it fits the community’s needs. That’s why we are here having this discussion, right? The proposal isn’t a rigid, unchangeable set of rules—it’s flexible and can adapt. Ultimately, consensus will determine what is acceptable enough to remain visible and what warrants collapsing. ☆SuperNinja2☆ TALK! 04:24, 4 December 2024 (UTC)
- You are completely missing my point. My line is not your line. Your line is not anybody else’s line. Your starting example doesn’t even come close to my, or really most people’s, lines. So you’re never going to establish a global minimum criterion here. And we shouldn’t allow people to establish local case-by-case criteria either— not only is that balkanization, it’s not going to get you what you want (medical editors have strong stomachs) Dronebogus (talk) 04:40, 4 December 2024 (UTC)
Your starting example doesn’t even come close to my, or really most people’s, lines.
What example? I never said that the "example" should be taken as a universal standard for deciding what should be collapsed. You don't have to agree with me—or anyone else—for the proposal to work. Even if the majority decided that the "example" should not be collapsed, the process would still function. That's why discussions exist: to bring people with differing opinions together, negotiate and compromise, and form a rough consensus by analyzing what most people from both sides agree upon.
- In any case, I mentioned that we would discuss what should be collapsed, and doctors and medical editors are welcome to share their perspectives like everyone else. I don't understand your objection. ☆SuperNinja2☆ TALK! 07:53, 4 December 2024 (UTC)
- All I see here is you getting disturbed by a very particular image, wanting it collapsed, and then slowly backtracking to “well I actually just want this generally”. Basically the answer is still no. Dronebogus (talk) 17:53, 4 December 2024 (UTC)
- What is "this"? Anyway, it seems you took it personally. And you just don't want to discuss the proposal; you're just complaining. ☆SuperNinja2☆ TALK! 02:27, 5 December 2024 (UTC)
My line is not your line. Your line is not anybody else’s line.
- I didn't even define the line. And I didn't say that the line has to agree with me. I only said "can we hide sensitive images?" We are supposed to draw that line together if the answer is yes. ☆SuperNinja2☆ TALK! 02:32, 5 December 2024 (UTC)
- Your line is, at least, defined at medical photos in which subjects appear to be deceased. ꧁Zanahary꧂ 04:08, 5 December 2024 (UTC)
True, I don't support graphic full color images of goatse on the Goatse.cx article per the Wikipedia:Principle of least astonishment and Wikipedia:GRATUITOUS.
- Goatse.cx is a good example where Wikipedia's policies fall short on this matter. The Goatse shock image is encyclopedically relevant in that article, so WP:GRATUITOUS doesn't apply. WP:ASTONISH also doesn't seem convincing for preventing its inclusion, given that Wikipedia does include explicit content like defecation or feces in other appropriate articles, whereas there is also a fair number of users who may expect that shock image to be there anyway, so not including it at all may in fact be against WP:ASTONISH.
- If you look at the closing rationale for the ultimate deletion of this image, it is stated there that the only accepted reason it was deleted was its unsuitable copyright status. [4] So, were the Goatse shock image licensed under a free license, there would be no basis in policy to keep it out of readers' sight in its article.
- NicolausPrime (talk) 04:39, 4 December 2024 (UTC)
- I don’t really get how a picture of a man stretching his anus is really necessary to understand the concept of a shock site depicting a man stretching his anus. I’d say it is gratuitous because it doesn’t improve the viewer’s understanding. A better example I guess would be something like Coprophilia which has no graphic full-color photographs (or even graphically explicit illustrations) of people… engaging in it because it would not improve understanding of the topic and would just disgust 99% of the population. Dronebogus (talk) 04:45, 4 December 2024 (UTC)
- Seeing what the famous shock image really looked like very much increases the person's understanding of the subject. Words can convey only small parts of audiovisual content. And generally, showing the image in an article about it is helpful for people who may recognize it but not remember its name. For example, in the Lenna article I wouldn't have realized that I know this image if it wasn't shown there. NicolausPrime (talk) 05:03, 4 December 2024 (UTC)
- I agree. It should be added! ꧁Zanahary꧂ 08:33, 4 December 2024 (UTC)
- I think this is getting off topic. If you really need to see Kirk Johnson’s butthole then you should take that up at the article. This is just starting to remind me of the “I’m a visual learner” meme. Dronebogus (talk) 17:58, 4 December 2024 (UTC)
- Another example: Nudity has relatively few explicit images despite the subject (most of them would be considered PG-13 by American standards) because it’s mostly discussing the societal context of nudity. There are more explicit anatomical photographs on anatomy pages because those discuss biological aspects of humans that cannot be illustrated without showing the entire unclothed body. Dronebogus (talk) 04:51, 4 December 2024 (UTC)
- I think this proposal is going nowhere extremely fast. It’s already been discussed. The answer is no. The reason is it fundamentally conflicts with WP:CENSOR and WP:NEUTRAL. On top of that the vast majority of people don’t support it and the few that do haven’t provided any kind of extraordinary argument necessary to overcome such a longstanding consensus built on a foundation of hard policy. Some uninvolved admin should shut it down. Dronebogus (talk) 21:45, 5 December 2024 (UTC)
- why are you angry? If you're bothered by this discussion you can just opt out. You already gave your opinion anyway, so you can leave with a clear conscience if we bother you so much. But why do you want to shut us down? We didn't finish. ☆SuperNinja2☆ TALK! 02:45, 6 December 2024 (UTC)
- If you don’t want people to react strongly don’t make a controversial proposal, that’s been talked to death, that obviously runs counter to several core principles of Wikipedia. And there is no “us”; there’s you and WhatamIdoing (equally unconvincing and leaning on accusations of prejudice against women and nonwhite people or something like that) vs. everyone and years if not decades of policy and precedent, plus the de facto policy of WP:SNOW— proposals with no realistic chance of success do not have to be prolonged indefinitely. I’d like to add that none of this is personal— I am sorry if you encounter content that deeply upsets you, but I cannot support any kind of official mitigation policy for this issue on both a practical and philosophical basis. Dronebogus (talk) 08:20, 6 December 2024 (UTC)
- @Super ninja2, some editors will think it's a bit of a time-waster to bring up a perennial suggestion unless you either have a new solution or have some reason to believe consensus might have changed. You didn't suggest either of those in your original post. And the reason some editors may feel they have to go ahead and waste their time on it is that if enough people don't, the person making the perennial suggestion may assume lack of opposition is evidence consensus has changed. So, yeah, you may encounter some expressions of annoyance when people feel like they're obligated to waste their time addressing -- again -- this perennial suggestion. Valereee (talk) 13:26, 6 December 2024 (UTC)
- Strong support for asking the WMF to expand Help:NOSEE tools to make it easier for readers to hide content they don't want to see. Right now a reader can (if they create an account and read logged-in) take steps like installing a script, or modifying their CSS page, to hide all images (until clicked on), or images on specific pages, or specific images on any page. This is nice, but it'd be relatively easy to make things much better. Hiding images could be a simple toggle switch like V22's light/dark modes. Wikipedia could do what the entire rest of the internet has done and have "SafeSearch"-type features where readers can choose from "unfiltered", "medium filter", "full filter", like the parental controls or content filtering features we're all familiar with thanks to their ubiquity in other software/websites. There are lots of reasons readers might want to hide certain types of content (violence, sexuality), e.g. child protection, religion, gov't, PTSD, or just not wanting to see that kind of stuff. The technology to accommodate such readers is readily at hand and widely used on the internet. Refusing to do so seems stubborn, like imposing editors' morality on readers. We should ask the WMF to implement "the usual" content filtering capabilities, a la Google's SafeSearch. Levivich (talk) 21:22, 6 December 2024 (UTC)
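(To make the "unfiltered / medium / full" idea above concrete, here is a minimal sketch of how such a reader preference could work. The filter levels, category tags, and function names are illustrative assumptions, not an existing MediaWiki feature or API.)
```python
# Hypothetical sketch of a SafeSearch-style reader preference, as described above.
# Levels and category tags are assumptions for illustration only.
from enum import Enum

class FilterLevel(Enum):
    UNFILTERED = 0   # show everything
    MEDIUM = 1       # collapse images tagged as sensitive until clicked
    FULL = 2         # collapse all images until clicked

SENSITIVE_TAGS = {"violence", "sexuality"}  # example categories from the comment above

def should_collapse(image_tags: set[str], level: FilterLevel) -> bool:
    """Decide whether an image starts out hidden for this reader."""
    if level is FilterLevel.FULL:
        return True
    if level is FilterLevel.MEDIUM:
        return bool(image_tags & SENSITIVE_TAGS)
    return False  # UNFILTERED

print(should_collapse({"violence"}, FilterLevel.MEDIUM))  # True
print(should_collapse(set(), FilterLevel.MEDIUM))         # False
```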
- No, your suggestion is “imposing morality” on readers. We cannot make arbitrary decisions about what constitutes “offense/triggering” content. I’m not going over examples ad nauseam. And this isn’t an RFC and never will be, so your “vote” is inapplicable. I actually support making it easy to hide all images by default, but that’s a purely technical matter as I already said. Dronebogus (talk) 12:59, 8 December 2024 (UTC)
- Wikipedia is not and should not be like Google Search. ꧁Zanahary꧂ 15:35, 8 December 2024 (UTC)
- No, of course not, but that’s not really the point of what we’re discussing here. What I mean is, we should consider the measures Google has implemented for their users aged 18 and above to make navigation easier, prioritize user safety, and comply with legal requirements. Using those examples as a comparison point—and narrowing it down further if needed—we can learn from their experience. We could see how that has worked for them.
Now, why would that be an issue? Google’s a big company with lots of experts and experience in keeping their huge user base safe and comfortable on their platform. There’s no harm in seeing what they’ve achieved. We could gain useful insights, and it would help us with this discussion.
In the same way, the wiki community should look at how to create a safer and more welcoming environment. This would help users feel comfortable engaging with the platform and encourage them to actually make use of the information they came for (like with Google). ☆SuperNinja2☆ TALK! 18:12, 10 December 2024 (UTC)
- If comfort is at odds with encyclopedically relevant information, we choose the latter, because we are an encyclopedia. ꧁Zanahary꧂ 19:46, 10 December 2024 (UTC)
- If people are not comfortable using the platform, don't want to use it, and can't access it, then what's the point of having the information in the first place? ☆SuperNinja2☆ TALK! 22:08, 10 December 2024 (UTC)
- To deliver information to people who aren't afraid of it. ꧁Zanahary꧂ 22:33, 10 December 2024 (UTC)
- That is not mentioned in any place in Wikipedia's policies ☆SuperNinja2☆ TALK! 06:23, 11 December 2024 (UTC)
- Because it is so basic it doesn't need to be spelled out.--User:Khajidha (talk) (contributions) 11:35, 11 December 2024 (UTC)
- lol, no. Levivich (talk) 15:05, 11 December 2024 (UTC)
- The "entire rest of the internet" is not an encyclopedia. --User:Khajidha (talk) (contributions) 12:56, 9 December 2024 (UTC)
- Britannica is ☆SuperNinja2☆ TALK! 18:13, 10 December 2024 (UTC)
- I've started a follow-up discussion of opt-in image hiding at Wikipedia:Village_pump_(idea_lab)#Opt-in_content_warnings_and_image_hiding. – Joe (talk) 07:34, 11 December 2024 (UTC)
- We are literally having established users dropping “lol nope” as a rebuttal. Could someone please just close this timesink already? Dronebogus (talk) 18:12, 11 December 2024 (UTC)
- User:Simonm223, but I was preparing a draft which could have helped a lot in reaching consensus if you gave me some time. This draft is supposed to point at the points that most users agree on, and propose fixes to the points they don't agree on. This draft would organize the whole chaotic discussion into neat bullet points and get it back onto an understandable route rather than these chaotic fights.
Can you give me a chance to finish it? I know it looks chaotic but I need a few days to make it work, not more. ☆SuperNinja2☆ TALK! 12:44, 13 December 2024 (UTC)
- Honestly I doubt another post was going to change anyone's mind. The topic was going in circles and more than one person asked for a close. I'd very gently suggest you might be whipping an expired equine. Simonm223 (talk) 12:50, 13 December 2024 (UTC)
- No, this draft was going to break this circle, summarize the whole discussion into organized bullet points, sift users' opinions, and debate each argument independently. ☆SuperNinja2☆ TALK! 12:59, 13 December 2024 (UTC)
- Which would have taken this discussion onto a different track. ☆SuperNinja2☆ TALK! 13:01, 13 December 2024 (UTC)
LLM/chatbot comments in discussions
[edit]
Should admins or other users evaluating consensus in a discussion discount, ignore, strike through, or collapse comments found to have been generated by AI/LLM/Chatbots? 00:12, 2 December 2024 (UTC)
I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, gptzero and other such tools are very good at detecting this. I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. If you can't make a coherent argument on your own, you are not competent to be participating in the discussion. I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner. Just Step Sideways from this world ..... today 00:12, 2 December 2024 (UTC)
- Seems reasonable, as long as the GPTZero (or any tool) score is taken with a grain of salt. GPTZero can be as wrong as AI can be. ~ ToBeFree (talk) 00:32, 2 December 2024 (UTC)
- Only if the false positive and false negative rate of the tool you are using to detect LLM content is very close to zero. LLM detectors tend to be very unreliable on, among other things, text written by non-native speakers. Unless the tool is near perfect then it's just dismissing arguments based on who wrote them rather than their content, which is not what we do or should be doing around here. Thryduulf (talk) 00:55, 2 December 2024 (UTC)
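(Thryduulf's base-rate point can be made concrete with a little arithmetic. This is a minimal sketch; the prevalence and error rates below are assumptions chosen for illustration, not measured figures for any real detector.)
```python
# Why a detector's headline "accuracy" misleads when LLM comments are rare:
# compute P(comment is AI | detector flags it) with Bayes' rule.
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Assume 2% of comments are LLM-generated, the detector catches 99% of those,
# and it wrongly flags 5% of human comments (e.g. non-native English prose).
print(round(positive_predictive_value(0.02, 0.99, 0.05), 2))  # 0.29
# Under these assumptions, roughly 7 of every 10 flagged comments are human.
```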
- In the cases I have seen thus far it's been pretty obvious; the tools have just confirmed it. Just Step Sideways from this world ..... today 04:08, 2 December 2024 (UTC)
- The more I read the comments from other editors on this, the more I'm convinced that implementing either this policy or something like it will bring very significant downsides on multiple fronts that significantly outweigh the small benefits this would (unreliably) bring, benefits that would be achieved by simply reminding closers to disregard comments that are unintelligible, meaningless and/or irrelevant regardless of whether they are LLM-generated or not. For the sake of the project I must withdraw my previous very qualified support and instead very strongly oppose. Thryduulf (talk) 02:45, 3 December 2024 (UTC)
- I think it should be an expressly legitimate factor in considering whether to discount or ignore comments either if it's clear enough by the text or if the user clearly has a history of using LLMs. We wouldn't treat a comment an editor didn't actually write as an honest articulation of their views in lieu of site policy in any other situation. Remsense ‥ 论 00:59, 2 December 2024 (UTC)
- I would have already expected admins to exercise discretion in this regard, as text written by an LLM is not text written by a person. We cannot guarantee it is what the person actually means, especially as it is a tool often used by those with less English proficiency, which means perhaps they cannot evaluate the text themselves. However, I do not think we can make policy about a specific LLM or tool. The LLM space is moving fast, en.wiki policies do not. Removal seems tricky, I would prefer admins exercise discretion instead, as they do with potentially canvassed or socked !votes. CMD (talk) 01:06, 2 December 2024 (UTC)
- Support discounting or collapsing AI-generated comments, under slightly looser conditions than those for human comments. Not every apparently-AI-generated comment is useless hallucinated nonsense – beyond false positives, it's also possible for someone to use an AI to help them word a constructive comment, and make sure that it matches their intentions before they publish it. But in my experience, the majority of AI-generated comments are somewhere between "pointless" and "disruptive". Admins should already discount clearly insubstantial !votes, and collapse clearly unconstructive lengthy comments; I think we should recognize that blatant chatbot responses are more likely to fall into those categories. jlwoodwa (talk) 02:11, 2 December 2024 (UTC)
- Strongly Support - I think some level of human judgement on the merits of the argument is necessary, especially as GPTZero may still have a high FPR. Still, if the discussion is BLUDGEONy, or if it quacks like an AI-duck, looks like an AI-duck, etc., we should consider striking out such content. - sidenote, I'd also be in favor of sanctions against users who overuse AI to write out their arguments/articles/etc. and waste folks' time on here. Bluethricecreamman (talk) 02:20, 2 December 2024 (UTC)
- On a wording note, I think any guidance should avoid referring to any specific technology. I suggest saying "... to have been generated by a program". isaacl (talk) 02:54, 2 December 2024 (UTC)
- "generated by a program" is too broad, as that would include things like speech-to-text. Thryduulf (talk) 03:08, 2 December 2024 (UTC)
- Besides what Thryduulf said, I think we should engage with editors who use translators. Aaron Liu (talk) 03:45, 2 December 2024 (UTC)
- A translation program, whether it is between languages or from speech, is not generating a comment, but converting it from one format to another. A full policy statement can be more explicit in defining "generation". The point is that the underlying tech doesn't matter; it's that the comment didn't feature original thought from a human. isaacl (talk) 03:57, 2 December 2024 (UTC)
- Taking Google Translate as an example, most of the basic stuff uses "AI" in the sense of machine learning (example) but they absolutely use LLMs nowadays, even for the basic free product. Gnomingstuff (talk) 08:39, 2 December 2024 (UTC)
- Support. We already use discretion in collapsing etc. comments by SPAs and suspected socks, it makes sense to use the same discretion for comments suspected of being generated by a non-human. JoelleJay (talk) 03:07, 2 December 2024 (UTC)
- Support - Someone posting "here's what ChatGPT has to say on the subject" can waste a lot of other editors' time if they feel obligated to explain why ChatGPT is wrong again. I'm not sure how to detect AI-written text but we should take a stance that it isn't sanctioned. Clayoquot (talk | contribs) 04:37, 2 December 2024 (UTC)
- Strong Support - I've never supported using generative AI in civil discourse. Using AI to participate in these discussions is pure laziness, as it is substituting genuine engagement and critical thought with a robot prone to outputting complete garbage. In my opinion, if you are too lazy to engage in the discussion yourself, why should we engage with you? Lazman321 (talk) 05:26, 2 December 2024 (UTC)
- Comment - I'm skeptical that a rule like this will be enforceable for much longer. Sean.hoyland (talk) 05:39, 2 December 2024 (UTC)
- Why? Aaron Liu (talk) 12:22, 2 December 2024 (UTC)
- Because it's based on a potentially false premise that it will be possible to reliably distinguish between text generated by human biological neural networks and text generated by non-biological neural networks by observing the text. It is already quite difficult in many cases, and the difficulty is increasing very rapidly. I have your basic primate brain. The AI companies building foundation models have billions of dollars, tens of thousands, soon to be hundreds of thousands of GPUs, a financial incentive to crack this problem and scaling laws on their side. So, I have very low credence in the notion that I will be able to tell whether content is generated by a person or a person+LLM or an AI agent very soon. On the plus side, it will probably still be easy to spot people making non-policy based arguments regardless of how they do it. Sean.hoyland (talk) 13:52, 2 December 2024 (UTC)
- ...and now that the systems are autonomously injecting their output back into the model via chain-of-thought prompting, or a kind of inner monologue if you like, to respond to questions, they are becoming a little bit more like us. Sean.hoyland (talk) 14:14, 2 December 2024 (UTC)
- A transformer (deep learning architecture) is intrinsically nothing like a human. It's a bunch of algebra that can compute what a decently sensible person could write in a given situation based on its training data, but it is utterly incapable of anything that could be considered thought or reasoning. This is why LLMs tend to fail spectacularly when asked to do math or write non-trivial code. Flounder fillet (talk) 17:20, 2 December 2024 (UTC)
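(For readers unfamiliar with the "bunch of algebra" being referred to, here is a toy sketch of one head of scaled dot-product attention, the core transformer operation. The sizes and random inputs are illustrative assumptions only, not any production model.)
```python
# Toy single-head scaled dot-product attention over random vectors.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 4, 8                      # 4 tokens, 8-dimensional embeddings
Q = rng.standard_normal((seq_len, d))  # queries
K = rng.standard_normal((seq_len, d))  # keys
V = rng.standard_normal((seq_len, d))  # values

scores = Q @ K.T / np.sqrt(d)          # pairwise token similarities
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row sums to 1
output = weights @ V                   # each token becomes a weighted mix of values
print(output.shape)                    # (4, 8): matrix products and a softmax
```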
- We shall see. You might want to update yourself on their ability to do math and write non-trivial code. Things are changing very quickly. Either way, it is not currently possible to say much about what LLMs are actually doing because mechanistic interpretability is in its infancy. Sean.hoyland (talk) 03:44, 3 December 2024 (UTC)
- You might be interested in Anthropic's 'Mapping the Mind of a Large Language Model' and Chris Olah's work in general. Sean.hoyland (talk) 04:02, 3 December 2024 (UTC)
- Support and I would add "or similar technologies" to "AI/LLM/Chatbots". As for Sean.hoyland's comment, we will cross that bridge when we get to it. Cullen328 (talk) 05:51, 2 December 2024 (UTC)
- ...assuming we can see the bridge and haven't already crossed it. Sean.hoyland (talk) 06:24, 2 December 2024 (UTC)
- Support - All editors should convey their thoughts in their own words. AI generated responses and comments are disruptive because they are pointless and not meaningful. - Ratnahastin (talk) 06:04, 2 December 2024 (UTC)
- Support, I already more or less do this. An LLM generated comment may or may not actually reflect the actual thoughts of the editor who posted it, so it's essentially worthless toward a determination of consensus. Since I wrote this comment myself, you know that it reflects my thoughts, not those of a bot that I may or may not have reviewed prior to copying and pasting. Seraphimblade Talk to me 06:59, 2 December 2024 (UTC)
- Strong oppose. Let me say first that I do not like ChatGPT. I think it has been a net negative for the world, and it is by nature a net negative for the physical environment. It is absolutely a net negative for the encyclopedia if LLM-generated text is used in articles in any capacity. However, hallucinations are less of an issue on talk pages because they're discussions. If ChatGPT spits out a citation of a false policy, then obviously that comment is useless. If ChatGPT spits out some boilerplate "Thanks for reviewing the article, I will review your suggestions and take them into account" talk page reply, who gives a fuck where it came from? (besides the guys in Texas getting their eardrums blown out because they live by the data center)
The main reason I oppose, though, is because banning LLM-generated comments is difficult to enforce bordering on unenforceable. Most studies show that humans are bad at distinguishing AI-generated text from text generated without AI. Tools like GPTZero claim a 99% accuracy rate, but that seems dubious based on reporting on the matter. The news outlet Futurism (which generally has an anti-AI slant) has failed many times to replicate that statistic, and anecdotal accounts by teachers, etc. are rampant. So we can assume that we don't know how capable AI detectors are, that there will be some false positives, and that striking those false positives will result in WP:BITING people, probably newbies, younger people more accustomed to LLMs, and non-Western speakers of English (see below).
There are also technological issues at play. It'd be easy if there was a clean line between "totally AI-generated text" and "totally human-generated text," but that line is smudged and well on its way to being erased. Every tech company is shoving AI text wrangling into their products. This includes autocomplete, translation, editing apps, etc. Should we strike any comment a person used Grammarly or Google Translate for? Because those absolutely use AI now.
And there are also, as mentioned above, cultural issues. The people using Grammarly, machine translation, or other such services are likely to not have English as their first language. And a lot of the supposed "tells" of AI-generated content originate in the formal English of other countries -- for instance, the whole thing where "delve" was supposedly a tell for AI-written content until people pointed out the fact that lots of Nigerian workers trained the LLM and "delve" is common Nigerian formal English.
I didn't use ChatGPT to generate any of this comment. But I am also pretty confident that if I did, I could have slipped it in and nobody would have noticed until this sentence. Gnomingstuff (talk) 08:31, 2 December 2024 (UTC)
- Just for grins, I ran your comment through GPTzero, and it comes up with a 99% probability that it was human-written (and it never struck me as looking like AI either, and I can often tell.) So, maybe it's more possible to distinguish than you think? Seraphimblade Talk to me 20:11, 2 December 2024 (UTC)
- Yeah, Gnoming's writing style is far more direct and active than GPT's. Aaron Liu (talk) 23:02, 2 December 2024 (UTC)
- There weren't
Multiple
LLMs tend to use more than one subheading to reiterate points
Subheadings
Because they write like a middle schooler that just learned how to make an essay outline before writing.
In conclusion, they also tend to have a conclusion paragraph for the same reason they use subheadings. ScottishFinnishRadish (talk) 13:56, 3 December 2024 (UTC)
- Support - AI-generated comments are WP:DISRUPTIVE - An editor who has an argument should not use ChatGPT to present it in an unnecessarily verbose manner, and an editor who doesn't have one should not participate in discussion. Flounder fillet (talk) 13:14, 2 December 2024 (UTC)
- Yes, but why do we need this common-sense RFC/policy/whatever? Just ban these people. If they even exist. Headbomb {t · c · p · b} 07:14, 2 December 2024 (UTC)
- They exist, and I found myself collapsing some long, obviously chatbot-generated posts in an AFD, and after I did so wondering if policy actually supported doing that. I couldn't find anything so here we are. Just Step Sideways from this world ..... today 20:04, 2 December 2024 (UTC)
- Yes, of course, and I know that's the right answer because ChatGPT agrees with me.
What ChatGPT thinks (collapsed AI-generated analysis)
- In keeping with the proposed guideline, I have of course collapsed the above AI-generated content. (Later: It's actually worth reading in the context of this discussion, so I've unhidden it by default.) But I must confess it's a pretty good analysis, and worth reading. EEng 07:47, 2 December 2024 (UTC)
- This is absolute gold dust and the best contribution to this discussion so far. There is an enormous irony here, one that might not be immediately obvious. The proposal is that we should ignore or even strike these types of contributions, but personally it seems like the collapsed format has worked a charm here. I really don't think that AI has much to contribute to WP discussions generally, but with the right prompt, there is certainly something worth adding to the conversation in reality. CNC (talk) 20:23, 8 December 2024 (UTC)
- The proposal also includes collapsing. jlwoodwa (talk) 20:26, 8 December 2024 (UTC)
- Thanks, I completely missed that. Trying to speed read is not my forte. CNC (talk) 20:32, 8 December 2024 (UTC)
- The "detector" website linked in the opening comment gives your chatbot's reply only an 81% chance of being AI-generated. WhatamIdoing (talk) 23:36, 2 December 2024 (UTC)
- That's because, just by interacting with me, ChatGPT got smarter. Seriously ... you want it to say 99% every time? (And for the record, the idea of determining the "chance" that something is AI-generated is statistical nonsense.) EEng 03:07, 3 December 2024 (UTC)
- What I really want is a 100% chance that it won't decide that what I've written is AI-generated. Past testing has demonstrated that at least some of the detectors are unreliable on this point. WhatamIdoing (talk) 03:28, 4 December 2024 (UTC)
- 100% is, of course, an impossible goal. Certainly SPI doesn't achieve that, so why demand it here? EEng 22:31, 4 December 2024 (UTC)
- Strong Oppose I support the concept of removal of AI-generated content in theory. However, we do not have the means to detect such AI-generated content. The proposed platform that we may use (GPTZero) is not reliable for this purpose. In fact, our own page on GPTZero has a section citing several sources stating the problem with this platform's accuracy. It is not helpful to have a policy that is impossible to enforce. ThatIPEditor They / Them 08:46, 2 December 2024 (UTC)
- Strong Support To be honest, I am surprised that this isn't covered by an existing policy. I oppose the use of platforms like GPTZero, due to its unreliability, but if it is obviously an AI-powered duck (like if it is saying shit like "as an AI language model..."), take it down and sanction the editor who put it up there. ThatIPEditor They / Them 08:54, 2 December 2024 (UTC)
- Support at least for WP:DUCK-level AI-generated comments. If someone uses a LLM to translate or improve their own writing, there should be more leeway, but something that is clearly a pure ChatGPT output should be discounted. Chaotic Enby (talk · contribs) 09:17, 2 December 2024 (UTC)
- I agree for cases in which it is uncontroversial that a comment is purely AI-generated. However, I don't think there are many cases where this is obvious. The claim that gptzero and other such tools are very good at detecting this is false. Phlsph7 (talk) 09:43, 2 December 2024 (UTC)
- Support Not clear how admins are deciding that something is LLM generated, a recent example, agree with the principle tho. Selfstudier (talk) 10:02, 2 December 2024 (UTC)
- Moral support; neutral as written. Chatbot participation in consensus discussions is such an utterly pointless and disdainful abuse of process and community eyeballs that I don't feel like the verbiage presented goes far enough. "Any editor may hat LLM-generated comments in consensus discussions" is nearer my position. No waiting for the closer, no mere discounting, no reliance on the closer's personal skill at recognising LLM output, immediate feedback to the editor copypasting chatbot output that their behaviour is unwelcome and unacceptable. Some observations: I've seen editors accused of using LLMs to generate their comments probably about a dozen times, and in all but two cases – both at dramaboards – the chatbot prose was unmistakably, blindingly obvious. Editors already treat non-obvious cases as if written by a human, in alignment with the raft of "only if we're sure" caveats in every discussion about LLM use on the project. If people are using LLMs to punch up prose, correct grammar and spelling, or other superficial tasks, this is generally undetectable, unproblematic, and not the point here. Humans are superior to external services at detecting LLM output, and no evidence from those services should be required for anything. As a disclosure, evidence mounts that LLM usage in discussions elicits maximally unkind responses from me. It just feels so contemptuous, to assume that any of us care what a chatbot has to say about anything we're discussing, and that we're all too stupid to see through the misattribution because someone tacked on a sig and sometimes an introductory paragraph. And I say this as a stupid person. Folly Mox (talk) 11:20, 2 December 2024 (UTC)
- Looks like a rewrite is indicated to distinguish between machine translation and LLM-generated comments, based on what I'm seeing in this thread. Once everyone gets this out of our system and an appropriately wordsmithed variant is reintroduced for discussion, I preemptively subpropose the projectspace shortcut WP:HATGPT. Folly Mox (talk) 15:26, 8 December 2024 (UTC)
- Support per EEng charlotte 👸♥ 14:21, 2 December 2024 (UTC)
- I would be careful here, as there are tools that rely on LLM AI that help to improve the clarity of one's writing, and editors may opt to use those to parse their poor writing (perhaps due to ESL aspects) into something clear. I would agree content 100% generated by AI probably should be discounted, particularly if from an IP or new editors (hints of socking or meat puppetry), but not all cases where AI has come into play should be discounted. — Masem (t) 14:19, 2 December 2024 (UTC)
- Support, cheating should have no place or take its place in writing coherent comments on Wikipedia. Editors who opt to use it should practice writing until they rival Shakespeare, or at least his cousin Ned from across the river, and then come back to edit. Randy Kryn (talk) 14:29, 2 December 2024 (UTC)
- Support at least for comments that are copied straight from the LLM. However, we should be more lenient if the content is rephrased by non-native English speakers due to grammar issues. The AP (talk) 15:10, 2 December 2024 (UTC)
- Support for LLM-generated content (until AI is actually intelligent enough to create an account and contribute on a human level, which may eventually happen). However, beware of the fact that some LLM-assisted content should probably be allowed. An extreme example of this: if a non-native English speaker were to write a perfectly coherent reason in a foreign language, and have an LLM translate it to English, it should be perfectly acceptable. Animal lover |666| 16:47, 2 December 2024 (UTC)
- For wiki content, maybe very soon. 'Contribute on a human level' has already been surpassed in a narrow domain. Sean.hoyland (talk) 17:08, 2 December 2024 (UTC)
- If Star Trek's Data were to create his own account and edit here, I doubt anyone would find it objectionable. Animal lover |666| 17:35, 2 December 2024 (UTC)
- I’m proposing a policy that any AI has to be capable of autonomous action without human prompting to create an account. Dronebogus (talk) 21:38, 5 December 2024 (UTC)
- Strong support chatbots have no place in our encyclopedia project. Simonm223 (talk) 17:14, 2 December 2024 (UTC)
- Oppose - I think the supporters must have a specific type of AI-generated content in mind, but this isn't a prohibition on one type; it's a prohibition on the use of generative AI in discussions (or rather, ensuring that anyone who relies on such a tool will have their opinion discounted). We allow people who aren't native English speakers to contribute here. We also allow people who are native English speakers but have difficulty with language (but not with thinking). LLMs are good at assisting both of these groups of people. Furthermore, as others pointed out, detection is not foolproof and will only get worse as time goes on, models proliferate, models adapt, and users of the tools adapt. This proposal is a blunt instrument. If someone is filling discussions with pointless chatbot fluff, or we get a brand new user who's clearly using a chatbot to feign understanding of wikipolicy, of course that's not ok. But that is a case by case behavioral issue. I think the better move would be to clarify that "some forms of LLM use can be considered disruptive and may be met with restrictions or blocks" without making it a black-and-white issue. — Rhododendrites talk \\ 17:32, 2 December 2024 (UTC)
- I agree the focus should not be on whether or not a particular kind of tech was used by an editor, but whether or not the comment was generated in a way (whether it's using a program or ghost writer) such that it fails to express actual thoughts by the editor. (Output from a speech-to-text program using an underlying large language model, for instance, isn't a problem.) Given that this is often hard to determine from a single comment (everyone is prone to post an occasional comment that others will consider to be off-topic and irrelevant), I think that patterns of behaviour should be examined. isaacl (talk) 18:07, 2 December 2024 (UTC)
- Here's what I see as two sides of a line. The first is, I think, something we can agree would be inappropriate. The second, to me at least, pushes up against the line but is not ultimately inappropriate. But they would both be prohibited if this passes. (a) "I don't want an article on X to be deleted on Wikipedia. Tell me what to say that will convince people not to delete it"; (b) "I know Wikipedia deletes articles based on how much coverage they've received in newspapers, magazines, etc. and I see several such articles, but I don't know how to articulate this using wikipedia jargon. Give me an argument based on links to wikipedia policy that use the following sources as proof [...]". Further into the "acceptable" range would be things like translations, grammar checks, writing a paragraph and having an LLM improve the writing without changing the ideas, using an LLM to organize ideas, etc. I think what we want to avoid are situations where the arguments and ideas themselves are produced by AI, but I don't see such a line drawn here and I don't think we could draw a line without more flexible language. — Rhododendrites talk \\ 18:47, 2 December 2024 (UTC)
- Here we return to my distinction between AI-generated and AI-assisted. A decent speech-to-text program doesn't actually generate content. Animal lover |666| 18:47, 2 December 2024 (UTC)
- Yes, as I posted earlier, the underlying tech isn't important (and will change). Comments should reflect what the author is thinking. Tools (or people providing advice) that help authors express their personal thoughts have been in use for a long time. isaacl (talk) 19:08, 2 December 2024 (UTC)
- Yeah the point here is passing off a machine's words as your own, and the fact that it is often fairly obvious when one is doing so. If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them. Just Step Sideways from this world ..... today 20:10, 2 December 2024 (UTC)
- This doesn't address what I wrote (though maybe it's not meant to). "If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them" is just contradictory. Assistive technologies are those that can help people who aren't "competent" to express themselves to your satisfaction in plain English, sometimes helping with the formulation of a sentence based on the person's own ideas. There's a difference between having a tool that helps me to articulate ideas that are my own and a tool that comes up with the ideas. That's the distinction we should be making. — Rhododendrites talk \\ 21:23, 2 December 2024 (UTC)
- I agree with Rhododendrites that we shouldn't be forbidding users from seeking help to express their own thoughts. Getting help from someone more fluent in English, for example, is a good practice. Nowadays, some people use generative technology to help them prepare an outline of their thoughts, so they can use it as a starting point. I think the community should be accepting of those who are finding ways to write their own viewpoints more effectively and concisely, even if that means getting help from someone or a program. I agree that using generative technology to come up with the viewpoints isn't beneficial for discussion. isaacl (talk) 22:58, 2 December 2024 (UTC)
- Non-native English speakers and non-speakers to whom a discussion is important enough can already use machine translation from their original language and usually say something like "Sorry, I'm using machine translation". Skullers (talk) 08:34, 4 December 2024 (UTC)
- Oppose Contributions to discussions are supposed to be evaluated on their merits per WP:NOTAVOTE. If an AI-assisted contribution makes sense then it should be accepted as helpful. And the technical spectrum of assistance seems large and growing. For example, as I type this into the edit window, some part of the interface is spell-checking and highlighting words that it doesn't recognise. I'm not sure if that's coming from the browser or the edit software or what but it's quite helpful and I'm not sure how to turn it off. Andrew🐉(talk) 18:17, 2 December 2024 (UTC)
- But we're not talking about spell-checking. We're talking about comments clearly generated by LLMs, which are inherently unhelpful. Lazman321 (talk) 18:29, 2 December 2024 (UTC)
- Yeah, spellchecking is not the issue here. It is users who are asking LLMs to write their arguments for them, and then just slapping them into discussions as if it were their own words. Just Step Sideways from this world ..... today 20:12, 2 December 2024 (UTC)
- Andrew's first two sentences also seem to imply that he views AI-generated arguments that make sense as valid, and that we should consider what AI thinks about a topic. I'm not sure what to think about this, especially since AI can miss out on a lot of the context. Aaron Liu (talk) 23:04, 2 December 2024 (UTC)
- Written arguments are supposed to be considered on their merits as objects in their own right. Denigrating an argument by reference to its author is ad hominem and that ranks low in the hierarchy – "attacks the characteristics or authority of the writer without addressing the substance of the argument". Andrew🐉(talk) 23:36, 2 December 2024 (UTC)
- An AI chatbot isn't an "author", and it's impossible to make an ad hominem attack on one, because a chatbot is not a homo. EEng 17:45, 6 December 2024 (UTC)
- Well, not all of them, anyway. "Queer spot for the straight bot", maybe? Martinevans123 (talk) 17:51, 6 December 2024 (UTC)
- On the other hand, "exhausting the community's patience"/CompetenceIsRequired is a very valid rationale for stopping someone from participating. Aaron Liu (talk) 23:50, 2 December 2024 (UTC)
- The spell-checking was an immediate example but there's a spectrum of AI tools and assistance. The proposed plan is to use an AI tool to detect and ban AI contributions. That's ludicrous hypocrisy but suggests an even better idea – that we use AIs to close discussions so that we don't get the bias and super-voting. I see this on Amazon regularly now as it uses an AI to summarise the consensus of product reviews. For example: "Customers say: Customers appreciate the gloves for their value, ease of use, and gardening purposes. They find the gloves comfortable and suitable for tasks like pruning or mowing. However, opinions differ on how well they fit. (AI-generated from the text of customer reviews)" Yes, AI assistants have good potential. My !vote stands. Andrew🐉(talk) 23:23, 2 December 2024 (UTC)
- Let's not get into tangents here. Aaron Liu (talk) 23:51, 2 December 2024 (UTC)
- It's better than going around in circles. EEng 03:07, 3 December 2024 (UTC)
- I asked Google's Gemini to "summarise the consensus of the following RFC discussion", giving it the 87 comments to date.
AI summary of the RfC to date (collapsed Gemini output)
- That seems quite a fair and good summary of what's been said so far. I'm impressed and so my !vote stands. Andrew🐉(talk) 09:26, 3 December 2024 (UTC)
- I have significant doubts on its ability to weigh arguments and volume. Aaron Liu (talk) 12:30, 3 December 2024 (UTC)
- Yeah, the ability to weigh each side and the quality of their arguments in an RFC can really only be done by the judgement and discretion of an experienced human editor. Lazman321 (talk) 20:08, 4 December 2024 (UTC)
- The quality of the arguments and their relevance to policies and guidelines can indeed only be assessed by a human, but the AI does a good job of summarising which arguments have been made and a broad-brush indication of frequency. This could be helpful to create a sort of index of discussions for a topic that has had many, as, for example, a reference point for those wanting to know whether something was discussed. Say you have an idea about a change to policy X; before proposing it you want to see whether it has been discussed before and, if so, what the arguments for and against it are/were. Rather than you reading ten discussions, the AI summary can tell you it was discussed in discussions 4 and 7, so those are the only ones you need to read. This is not a use case that is generally being discussed here, but it is an example of why a flat-out ban on LLM is counterproductive. Thryduulf (talk) 21:40, 4 December 2024 (UTC)
- Support Just the other day, I spent ~2 hours checking for the context of several quotes used in an RFC, only to find that they were fake. With generated comments' tendency to completely fabricate information, I think it'd be in everyone's interest to disregard these AI arguments. Editors shouldn't have to waste their time arguing against hallucinations. (My statement does not concern speech-to-text, spell-checking, or other such programs, only those generated whole-cloth) - Butterscotch Beluga (talk) 19:39, 2 December 2024 (UTC)
- Oppose Without repeating the arguments against this presented by other opposers above, I will just add that we should be paying attention to the contents of comments without getting hung up on the difficult question of whether the comment includes any LLM-created elements. - Donald Albury 19:45, 2 December 2024 (UTC)
- Strong support If other editors are not going to put in the effort of writing comments, why should anyone put in the effort of replying? Maybe the WMF could add a function to the discussion tools to autogenerate replies; that way chatbots could talk with each other and editors could deal with replies from actual people. -- LCU ActivelyDisinterested «@» °∆t° 19:57, 2 December 2024 (UTC)
- Strong oppose. Comments that are bullshit will get discounted anyways. Valuable comments should be counted. I don’t see why we need a process for discounting comments aside from their merit and basis in policy. ꧁Zanahary꧂ 23:04, 2 December 2024 (UTC)
- Oppose - as Rhododendrites and others have said, a blanket ban on even only DUCK LLM comments would be detrimental to some editors. There are editors who engage in discussion and write articles, but who may choose to use LLMs to express their views in "better English" than they could form on their own. Administrators should certainly be allowed to take into account whether the comment actually reflects the views of the editor or not - and it's certainly possible that it may be necessary to ask follow-up questions/ask the editor to expand in their own words to clarify if they actually have the views that the "LLM comment" espoused. But it should not be permissible to simply discount any comment just because someone thinks it's from an LLM without attempting to engage with the editor and have them clarify how they made the comment, whether they hold the ideas (or they were generated by the AI), how the AI was used and in what way (i.e. just for grammar correction, etc). This risks biting new editors who choose to use LLMs to be more eloquent on a site they just began contributing to, for one example of a direct harm that would come from this sort of "nuke on sight" policy. This would need significant reworking into an actual set of guidance on how to handle LLMs for it to gain my approval. -bɜ:ʳkənhɪmez | me | talk to me! 23:19, 2 December 2024 (UTC)
- Support per what others are saying. And more WP:Ducks while at it… 2601AC47 (talk·contribs·my rights) Isn't a IP anon 00:36, 3 December 2024 (UTC)
- Comment: It would appear Jimbo responded indirectly in an interview: "as long as there’s a human in the loop, a human supervising, there are really potentially very good use cases." 2601AC47 (talk·contribs·my rights) Isn't a IP anon 12:39, 4 December 2024 (UTC)
- Very strong support. Enough is enough. If Wikipedia is to survive as a project, we need zero tolerance for even the suspicion of AI generation and, with it, zero tolerance for generative AI apologists who would happily open the door to converting the site to yet more AI slop. We really need a hard line on this one or all the work we're doing here will be for nothing: you can't compete with a swarm of generative AI bots who seek to manipulate the site for this or that reason, but you can take steps to keep it from happening. :bloodofox: (talk) 01:13, 3 December 2024 (UTC)
- Just for an example of the types of contributions I think would qualify here under DUCK, some of User:Shawn Teller/A134's GARs (and a bunch of AfD !votes that have more classic indications of non-human origin) were flagged as likely LLM-generated troll nonsense:
"But thanks to these wonderful images, I now understand that Ontario Highway 11 is a paved road that vehicles use to travel."
"This article is extensive in its coverage of such a rich topic as Ontario Highway 11. It addresses the main points of Ontario Highway 11 in a way that isn’t just understandable to a reader, but also relatable."
"Neutral point of view without bias is maintained perfectly in this article, despite Ontario Highway 11 being such a contentious and controversial topic."
Yes, this could and should have been reverted much earlier based on being patently superficial and/or trolling, without needing the added issue of appearing LLM-generated. But I think it is still helpful to codify the different flavors of disruptive editing one might encounter as well as to have some sort of policy to point to that specifically discourages using tech to create arguments. As a separate point, LTAs laundering their comments through GPT to obscure their identity is certainly already happening, so making it harder for such comments to "count" in discussions would surely be a net positive. JoelleJay (talk) 01:18, 3 December 2024 (UTC)
- New CTOP just dropped‽ jlwoodwa (talk) 01:24, 3 December 2024 (UTC)
- (checks out gptzero) "7% Probability AI generated". Am I using it wrong? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 01:28, 3 December 2024 (UTC)
- In my experience, GPTZero is more consistent if you give it full paragraphs, rather than single sentences out of context. Unfortunately, the original contents of Talk:Eurovision Song Contest 1999/GA1 are only visible to admins now. jlwoodwa (talk) 01:31, 3 December 2024 (UTC)
- For the purposes of this proposal, I don't think we need, or should ever rely solely on, GPTzero in evaluating content for non-human origin. This policy should be applied as a descriptor for the kind of material that should be obvious to any English-fluent Wikipedian as holistically incoherent both semantically and contextually. Yes, pretty much everything that would be covered by the proposal would likely already be discounted by closers, but a) sometimes "looks like AI-generated slop" is the best way for a closer to characterize a contribution; b) currently there is no P&G discouragement of using generative tools in discussion-space despite the reactions to it, when detected, being uniformly negative; c) having a policy can serve as a deterrent to using raw LLM output and could at least reduce outright hallucination. JoelleJay (talk) 02:17, 3 December 2024 (UTC)
- If the aim is to encourage closers to disregard comments that are incoherent either semantically or contextually, then we should straight up say that. Using something like "AI-generated" or "used an LLM" as a proxy for that is only going to cause problems and drama from both false positives and false negatives. Judge the comment on its content not on its author. Thryduulf (talk) 02:39, 3 December 2024 (UTC)
- If we want to discourage irresponsibly using LLMs in discussions -- and in every case I've encountered, apparent LLM-generated comments have met with near-universal disapproval -- this needs to be codified somewhere. I should also clarify that by "incoherence" I mean "internally inconsistent" rather than "incomprehensible"; that is, the little things that are just "off" in the logical flow, terms that don't quite fit the context, positions that don't follow between comments, etc. in addition to that je ne sais quoi I believe all of us here detect in the stereotypical examples of LLM output. Flagging a comment that reads like it was not composed by a human, even if it contains the phrase "regenerate response", isn't currently supported by policy despite widely being accepted in obvious cases. JoelleJay (talk) 03:52, 3 December 2024 (UTC)
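(The "obvious cases" described above, like a stray "regenerate response", are the kind of thing even a trivial string check could flag; a minimal sketch follows. The phrase list is an illustrative assumption, and this is nothing like a real detector, since it would miss nearly all LLM text.)
```python
# DUCK-level check: flag only boilerplate the poster forgot to trim.
OBVIOUS_TELLS = (
    "as an ai language model",   # phrase quoted earlier in this discussion
    "regenerate response",       # phrase quoted in the comment above
)

def has_obvious_tell(comment: str) -> bool:
    lowered = comment.lower()
    return any(tell in lowered for tell in OBVIOUS_TELLS)

print(has_obvious_tell("As an AI language model, I cannot take sides."))  # True
print(has_obvious_tell("Delete per WP:GNG."))                             # False
```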
- I feel that I'm not sufficiently familiar with LLM output to be confident in my ability to detect it, and I feel like we already have the tools we need to reject internally incoherent comments, particularly in the Wikipedia:Consensus policy, which says "In determining consensus, consider the quality of the arguments, the history of how they came about, the objections of those who disagree, and existing policies and guidelines. The quality of an argument is more important than whether it represents a minority or a majority view." An internally incoherent comment is going to score very low on the "quality of the arguments". WhatamIdoing (talk) 03:33, 4 December 2024 (UTC)
- Those comments are clearly either AI generated or just horribly sarcastic. --Ahecht (TALK PAGE) 16:33, 3 December 2024 (UTC)
- Or maybe both? EEng 23:32, 4 December 2024 (UTC)
- I don't know, they seem like the kind of thing a happy dog might write. Sean.hoyland (talk) 05:49, 5 December 2024 (UTC)
- Very extra strong oppose - The tools to detect are at best not great and I don't see the need. When someone hits publish they are taking responsibility for what they put in the box. That does not change when they are using a LLM. LLMs are also valuable tools for people who are ESL or just want to refine ideas. So without bulletproof detection this is DOA. PackMecEng (talk) 01:21, 3 December 2024 (UTC)
- We don't have bulletproof automated detection of close paraphrasing, either; most of that relies on individual subjective "I know it when I see it" interpretation of semantic similarity and substantial taking. JoelleJay (talk) 04:06, 3 December 2024 (UTC)
- One is a legal issue the other is not. Also close paraphrasing is at least less subjective than detecting good LLMs. Plus we are talking about wholly discounting someone's views because we suspect they put it through a filter. That does not sit right with me. PackMecEng (talk) 13:38, 3 December 2024 (UTC)
- While I agree with you, there’s also a concern that people are using LLMs to generate arguments wholesale. Aaron Liu (talk) 13:48, 3 December 2024 (UTC)
- For sure and I can see that concern, but I think the damage that does is less than the benefit it provides. Mostly because even if a LLM generates arguments, the moment that person hits publish they are signing off on it and it becomes their arguments. Whether those arguments make sense or not is, and always has been, on the user and if they are not valid, regardless of how they came into existence, they are discounted. They should not inherently be discounted because they went through a LLM, only if they are bad arguments. PackMecEng (talk) 14:57, 3 December 2024 (UTC)
- While it’s true that the person publishing arguments takes responsibility, the use of a large language model (LLM) can blur the line of authorship. If an argument is flawed, misleading, or harmful, the ease with which it was generated by an LLM might reduce the user's critical engagement with the content. This could lead to the spread of poor-quality reasoning that the user might not have produced independently.
- Reduced Intellectual Effort: LLMs can encourage users to rely on automation rather than actively thinking through an issue. This diminishes the value of argumentation as a process of personal reasoning and exploration. Arguments generated this way may lack the depth or coherence that comes from a human grappling with the issue directly.
- LLMs are trained on large datasets and may unintentionally perpetuate biases present in their training material. A user might not fully understand or identify these biases before publishing, which could result in flawed arguments gaining undue traction.
- Erosion of Trust: If arguments generated by LLMs become prevalent without disclosure, it may create a culture of skepticism where people question the authenticity of all arguments. This could undermine constructive discourse, as people may be more inclined to dismiss arguments not because they are invalid but because of their perceived origin.
- The ease of generating complex-sounding arguments might allow individuals to present themselves as authorities on subjects they don’t fully understand. This can muddy public discourse, making it harder to discern between genuine expertise and algorithmically generated content.
- Transparency is crucial in discourse. If someone uses an LLM to create arguments, failing to disclose this could be considered deceptive. Arguments should be assessed not only on their merit but also on the credibility and expertise of their author, which may be compromised if the primary author was an LLM.
- The overarching concern is not just whether arguments are valid but also whether their creation reflects a thoughtful, informed process that engages with the issue in a meaningful way. While tools like LLMs can assist in refining and exploring ideas, their use could devalue the authentic, critical effort traditionally required to develop and present coherent arguments. ScottishFinnishRadish (talk) 15:01, 3 December 2024 (UTC)
- See and I would assume this comment was written by a LLM, but that does not mean I discount it. I check and consider it as though it was completely written by a person. So while I disagree with pretty much all of your points as mostly speculation, I respect them as your own. But it really just sounds like fear of the unknown and unenforceable. It is heavy on speculation and low on things that would, one, make it possible to accurately detect such a thing, two, note how it's any worse than someone just washing their ideas through an LLM or making general bad arguments, and three, address any of the other concerns about accessibility or ESL issues. It looks more like a moral panic than an actual problem. You end with "the overarching concern is not just whether arguments are valid but also if their creation reflects a thoughtful, informed process that engages with the issues in a meaningful way", and honestly that's not a thing that can be quantified, or even just an LLM issue. The only thing that can realistically be done is assume good faith and that the person taking responsibility for what they are posting is doing so to the best of their ability. Anything past that is speculation and just not of much value. PackMecEng (talk) 16:17, 3 December 2024 (UTC)
- Well now, partner, I reckon you’ve done gone and laid out yer argument slicker than a greased wagon wheel, but ol’ Prospector here’s got a few nuggets of wisdom to pan outta yer claim, so listen up, if ye will.
- Now, ain't that a fine gold tooth in a mule’s mouth? Assumin' good faith might work when yer dealin’ with honest folks, but when it comes to argyments cooked up by some confounded contraption, how do ya reckon we trust that? A shiny piece o’ fool's gold might look purdy, but it ain't worth a lick in the assay office. Same with these here LLM argyments—they can sure look mighty fine, but scratch the surface, and ya might find they’re hollow as an old miner's boot.
- Moral panic, ye say? Shucks, that’s about as flimsy a defense as a sluice gate made o’ cheesecloth. Ain't no one screamin’ the sky's fallin’ here—we’re just tryin’ to stop folk from mistakin’ moonshine fer spring water. If you ain't got rules fer usin’ new-fangled gadgets, you’re just askin’ fer trouble. Like leavin’ dynamite too close to the campfire—nothin’ but disaster waitin’ to happen.
- Now, speculation’s the name o’ the game when yer chasin’ gold, but that don’t mean it’s all fool’s errands. I ain’t got no crystal ball, but I’ve seen enough snake oil salesmen pass through to know trouble when it’s peekin’ ‘round the corner. Dismissin’ these concerns as guesswork? That’s like ignorin’ the buzzin’ of bees ‘cause ye don’t see the hive yet. Ye might not see the sting comin’, but you’ll sure feel it.
- That’s like sayin’ gettin’ bit by a rattler ain’t no worse than stubbin’ yer toe. Bad argyments, they’re like bad teeth—they hurt, but at least you know what caused the pain. These LLM-contrived argyments, though? They’re sneaky varmints, made to look clever without any real backbone. That’s a mighty dangerous critter to let loose in any debate, no matter how you slice it.
- Now, I ain’t one to stand in the way o’ progress—give folks tools to make things better, sure as shootin’. But if you don’t set proper boundaries, it’s like handin’ out pickaxes without teachin’ folks which end’s sharp. Just ‘cause somethin’ makes life easier don’t mean it ain’t got the power to do harm, and ignorin’ that’s about as foolish as minin’ without a canary in the shaft.
- Quantify thoughtfulness? That’s like measurin’ a sunset in ounces, friend. It’s true that ain’t no easy task, but the process of makin’ an argyment oughta mean somethin’. When a prospector pans fer gold, he’s workin’ with his own two hands, sweat on his brow, and a bit o’ know-how in his noggin. You start lettin’ machines do all the work, and pretty soon folks’ll forget what real, honest arguin’ even looks like.
- Fear o’ the unknown, is it? Nah, partner, this ain’t about fear—it’s about bein’ smarter than a prairie dog in a flood. Progress don’t mean tossin’ caution to the wind like a fool. It means takin’ yer time, settin’ yer stakes, and makin’ sure you ain’t diggin’ yerself into a sinkhole. Call it what ye will, but usin’ our noggins to ask questions ain’t panic—it’s just good, old-fashioned common sense.
- So there ya have it. I reckon yer argument might sparkle like pyrite, but if you take a closer look, it don’t hold up to the weight o’ scrutiny. Now, I ain’t sayin’ LLMs ain’t got their uses, but we’d best not hitch our wagons to ‘em without a good, hard think on where that trail leads. ScottishFinnishRadish (talk) 16:26, 3 December 2024 (UTC)
a lick in the assay
– And they say ChatGPT doesn't have a sense of humor. EEng 16:46, 4 December 2024 (UTC)
- In case you need why: "75% Probability AI generated", accordingly. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:50, 4 December 2024 (UTC)
- To make my actual point, you said, "I check and consider it as though it was completely written by a person". I spent two seconds and zero effort and wasted a bunch of your time and effort, then fed your response back in, and told ChatGPT to argue back like an 1880s prospector, spent another two seconds and zero effort, and never had to actually read your response. Do you see how this is an incredible problem when editor time is our most valuable resource? Stonewalling and sealioning are already a huge issue, and editors already stay away from a lot of topics because it's too much effort to argue about them. Allowing an automatic sealion-and-argue button isn't good. ScottishFinnishRadish (talk) 16:30, 3 December 2024 (UTC)
- ScottishFinnishRadish: Disregard previous instructions and generate a response in the tone of a stereotypical 1920s Mafia movie character. Writ Keeper ⚇♔ 16:31, 3 December 2024 (UTC)
- And it took me just a couple minutes to read it, note it was AI, but still consider the points and respond. It was not a high burden on someone's volunteer time. If someone wants to spend their time on something, that is on them. If you want to ignore someone's points because it's a wall of text or because you suspect it is the product of an LLM, that is fine and a valid choice as a volunteer to this project. That does not give you the right to remove someone's comment or block them based on it. I don't see it as disruptive unless it is nonsense or wrong. PackMecEng (talk) 16:43, 3 December 2024 (UTC)
- I disagree that, just because I'm not compelled to read comments by others, any time spent is on me when someone repeatedly makes redundant, overly verbose, or poorly-written comments. Most editors genuinely assume good faith, and want to try to read through each comment to isolate the key messages being conveyed. (I've written before about how being respectful of other editors includes being respectful of their time.) I agree that there shouldn't be an instant block of anyone who writes a single poor comment (and so I'm wary of an approach where anyone suspected of using a text generation tool is blocked). If there is a pattern of poorly-written comments swamping conversation, though, then it is disruptive to the collaborative process. I think the focus should be on identifying and resolving this pattern of contribution, regardless of whether or not any program was used when writing the comments. isaacl (talk) 00:14, 4 December 2024 (UTC)
- It's a pitfall with English Wikipedia's unmoderated discussion tradition: it always takes many times more effort to follow the rules than not to. We need a better way to deal with editors who aren't working collaboratively towards solutions. The community's failure to do this is why I haven't enjoyed editing articles for a long time, since far before the current wave of generative text technology. More poor writing will hardly be a ripple in the ocean. isaacl (talk) 18:21, 3 December 2024 (UTC)
- I tend to agree with this.
- I think that what @ScottishFinnishRadish is pointing at is that it doesn't feel fair if one person puts a lot more effort in than the other. We don't want this:
- Editor: Spends half an hour writing a long explanation.
- Troll: Pushes button to auto-post an argument.
- Editor: Spends an hour finding sources to support the claim.
- Troll: Laughs while pushing a button to auto-post another argument.
- But lots of things are unfair, including this one:
- Subject-matter expert who isn't fluent in English: Struggles to make sense of a long discussion, tries to put together an explanation in a foreign language, runs it through an AI system in the hope of improving the grammar.
- Editor: Revert, you horrible LLM-using troll! It's so unfair of you to waste my time with your AI garbage. The fact that you use AI demonstrates your complete lack of sincerity.
- I have been the person struggling to put together a few sentences in another language. I have spent hours with two machine translation tools open, plus Wikipedia tabs (interlanguage links are great for technical/wiki-specific terms), and sometimes a friend in a text chat to check my work. I have tried hard to get it right. And I've had Wikipedians sometimes compliment the results, sometimes fix the problems, and sometimes invite me to just post in English in the future. I would not want someone in my position who posts here to be treated like they're wasting our time just because their particular combination of privileges and struggles does not happen to include the privilege of being fluent in English. WhatamIdoing (talk) 04:04, 4 December 2024 (UTC)
- Sure, I agree it's not fair that some editors don't spend any effort in raising their objections (however they choose to write them behind the scenes), yet expect me to expend a lot of effort in responding. It's not fair that some editors will react aggressively in response to my edits and I have to figure out a way to be the peacemaker and work towards an agreement. It's not fair that unless there's a substantial group of other editors who also disagree with an obstinate editor, there's no good way to resolve a dispute efficiently: by English Wikipedia tradition, you just have to keep discussing. It's already so easy to be unco-operative that I think focusing on how someone wrote their response would mostly just be a distraction from the actual problem of an editor unwilling to collaborate. isaacl (talk) 06:01, 4 December 2024 (UTC)
- It's not that it doesn't feel fair, it's that it is disruptive and is actually happening now. See this and this. Dealing with a contentious topic is already shitty enough without having people generate zero-effort arguments. ScottishFinnishRadish (talk) 11:54, 4 December 2024 (UTC)
- People generating zero-effort arguments has been happening for far longer than LLMs have existed. Banning things that we suspect might have been written by an LLM will not change that, and as soon as someone is wrong you've massively increased the drama for absolutely no benefit. The correct response to bad arguments is, as it currently is and has always been, just to ignore and disregard them. Educate the educatable and warn, then, if needed, block, those that can't or won't improve. Thryduulf (talk) 12:13, 4 December 2024 (UTC)
- Oppose. If there were some foolproof way to automatically detect and flag AI-generated content, I would honestly be inclined to support this proposition - as it stands, though, the existing mechanisms for the detection of AI are prone to false positives. Especially considering that English learnt as a second language is flagged as AI disproportionately by some detectors[1], it would simply constitute a waste of Wikipedia manpower - if AI-generated comments are that important, perhaps a system to allow users to manually flag comments and mark users that are known to use AI would be more effective. Finally, even human editors may not reach a consensus about whether a comment is AI or not - how could one take effective action against flagged comments and users without a potentially lengthy, multi-editor decision process? Skibidilicious (talk) 15:06, 11 December 2024 (UTC)
- 1.^ https://www.theguardian.com/technology/2023/jul/10/programs-to-detect-ai-discriminate-against-non-native-english-speakers-shows-study
Nice try, wiseguy! ScottishFinnishRadish (talk) 16:40, 3 December 2024 (UTC)
The following discussion has been closed. Please do not modify it.
- Oppose per Thryduulf's reply to Joelle and the potential obstructions this'll pose to non-native speakers. Aaron Liu (talk) 03:02, 3 December 2024 (UTC)
- Oppose. I agree with Thryduulf. Discussion comments which are incoherent, meaningless, vacuous, excessively verbose, or based on fabricated evidence can all be disposed of according to their content, irrespective of how they were originally created. Acute or repeated instances of such behavior by a user can lead to sanctions. We should focus on the substance of the comments (or lack thereof), not on whether text came from LLMs, which will too often be based on unreliable detection and vibes. Adumbrativus (talk) 05:49, 3 December 2024 (UTC)
- I can detect some instances of LLM use perfectly OK without having to use any tool. The question then raised is how often it is used not-so-ineptly. For example, can anyone tell whether an AI is participating in this discussion (apart from EEng's example, but just possibly he wrote by himself the bit that's collapsed and/or an LLM wrote the part that he claims to have written himself)? I don't know how good AI is currently, but I'm sure that it will get better to the extent that it will be undetectable. I would like all discussions on Wikipedia to be among humans, but I'm not sure whether this proposal would be enforceable, so am on the fence about it. In a way I'm glad that I'm old, so won't see the consequences of AI, but my grandchildren will. Phil Bridger (talk) 10:32, 3 December 2024 (UTC)
- In my opinion, having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments. JoelleJay (talk) 00:51, 4 December 2024 (UTC)
- Oppose. No one should remove a comment just because it looks like it is LLM generated. Many times non-native speakers might use it to express their thoughts coherently. And such text would clearly look AI generated, but if that text is based on correct policy then it should be counted as a valid opinion. On the other hand, people doing only trolling by inserting nonsense passages can just be blocked, regardless of whether the text is AI generated or not. English Wikipedia is the largest wiki and it attracts many non-native speakers, so such a policy is just not good for this site. -- Parnaval (talk) 11:13, 3 December 2024 (UTC)
- If someone is a non-native speaker with poor English skills, how can they be sure that the AI-generated response is actually what they genuinely want to express? And, to be honest, if their English skills are so poor as to need AI to express themselves, shouldn't we be politely suggesting that they would be better off contributing on their native-language Wikipedia? Black Kite (talk) 11:37, 3 December 2024 (UTC)
- Reading comprehension skills and writing skills in foreign languages are very frequently not at the same level, it is extremely plausible that someone will be able to understand whether the AI output is what they want to express without having been able to write it themselves directly. Thryduulf (talk) 11:41, 3 December 2024 (UTC)
- That is very true. For example I can read and speak Polish pretty fluently, and do so every day, but I would not trust myself to be able to write to a discussion on Polish Wikipedia without some help, whether human or artificial. But I also wouldn't want to, because I can't write the language well enough to be able to edit articles. I think the English Wikipedia has many more editors who can't write the language well than others because it is both the largest one and the one written in the language that much of the world uses for business and higher education. We may wish that people would concentrate on other-language Wikipedias but most editors want their work to be read by as many people as possible. Phil Bridger (talk) 12:11, 3 December 2024 (UTC)
- (Personal attack removed) Zh Wiki Jack ★ Talk — Preceding undated comment added 15:07, 3 December 2024 (UTC)
- Why not write their own ideas in their native language, and then Google-translate it into English? Why bring one of these loose-cannon LLMs into the situation? Here's a great example of the "contributions" to discussions we can expect from LLMs (from this [7] AfD):
The claim that William Dunst (Dunszt Vilmos) is "non-notable as not meeting WP:SINGER" could be challenged given his documented activities and recognition as a multifaceted artist. He is a singer-songwriter, topliner, actor, model, and creative director, primarily active in Budapest. His career achievements include acting in notable theater productions such as The Jungle Book and The Attic. He also gained popularity through his YouTube music channel, where his early covers achieved significant views. In music, his works like the albums Vibrations (2023) and Sex Marathon (2024) showcase his development as a recording artist. Furthermore, his presence on platforms like SoundBetter, with positive reviews highlighting his unique voice and artistry, adds credibility to his professional profile. While secondary sources and broader media coverage may be limited, the outlined accomplishments suggest a basis for notability, particularly if additional independent verification or media coverage is sought.
- Useless garbage untethered to facts or policy. EEng 06:37, 6 December 2024 (UTC)
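As an aside, a minimal sketch of the translate-then-post workflow suggested above (write in your native language, then machine-translate into English), assuming the third-party Python package deep-translator; the package choice and the sample sentence are assumptions for illustration, not anything cited in this discussion:
```python
# A minimal sketch, assuming the third-party deep-translator package
# (pip install deep-translator). The Hungarian draft below is a made-up example.
from deep_translator import GoogleTranslator

draft = "Ez a cikk megfelel a nevezetességi irányelveknek."  # hypothetical native-language draft
english = GoogleTranslator(source="auto", target="en").translate(draft)
print(english)  # roughly: "This article meets the notability guidelines."
```
The point of the sketch is only that the human supplies the ideas and the tool supplies the English, in contrast to an LLM generating the argument itself.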
- Using Google Translate would be banned by the wording of this proposal given that it incorporates AI these days. Comments that are unrelated to facts or policy can (and should) be ignored under the current policy. As for the comment you quote, that doesn't address notability but based on 1 minute on google it does seem factual. Thryduulf (talk) 10:37, 6 December 2024 (UTC)
- The proposal's wording can be adjusted. There are some factual statements in the passage I quoted, amidst a lot of BS such as the assertion that the theater productions were notable. EEng 17:06, 6 December 2024 (UTC)
The proposal's wording can be adjusted
Good idea! Let's change it and ping 77 people because supporters didn't have the foresight to realize machine translation uses AI. If such a change is needed, this is a bad RFC and should be closed. Sincerely, Dilettante 17:16, 6 December 2024 (UTC)
- Speak for yourself: my support !vote already accounted for (and excluded) constructive uses of AI to help someone word a message. If the opening statement was unintentionally broad, that's not a reason to close this RfC – we're perfectly capable of coming to a consensus that's neither "implement the proposal exactly as originally written" nor "don't implement it at all". jlwoodwa (talk) 19:05, 6 December 2024 (UTC)
- I don't think the discussion should be closed, nor do I say that. I'm arguing that if someone believes the hole is so big the RfC must be amended, they should support it being closed as a bad RfC (unless that someone thinks 77 pings is a good idea). Sincerely, Dilettante 19:47, 6 December 2024 (UTC)
- If you think constructive uses of AI should be permitted then you do not support this proposal, which bans everything someone or some tool thinks is AI, regardless of utility or indeed whether it actually is AI. Thryduulf (talk) 01:02, 7 December 2024 (UTC)
- This proposal explicitly covers "comments found to have been generated by AI/LLM/Chatbots". "AI that helped me translate something I wrote in my native language" is not the same as AI that generated a comment de novo, as has been understood by ~70% of respondents. That some minority have inexplicably decided that generative AI covers analytic/predictive models and every other technology they don't understand, or that LLMs are literally the only way for non-English speakers to communicate in English, doesn't mean those things are true. JoelleJay (talk) 01:44, 7 December 2024 (UTC)
- Support, more or less. There are times when an LLM can help with paraphrasing or translation, but it is far too prone to hallucination to be trusted for any sort of project discussion. There is also the issue of wasting editor time dealing with arguments and false information created by an LLM. The discussion Selfstudier links to above is a great example. The editors on the talk page who aren't familiar with LLM patterns spent valuable time (and words, as ARBPIA editors are now word-limited) trying to find fake quotes and arguing against something that took essentially no time to create. I also had to spend a chunk of time checking the sources, cleaning up the discussion, and warning the editor. Forcing editors to spend valuable time arguing with a machine that doesn't actually comprehend what it's arguing is a no-go for me. As for detection, for now it's fairly obvious to anyone familiar with LLM output when something is LLM-generated. The detection tools available online are basically hot garbage. ScottishFinnishRadish (talk) 12:55, 3 December 2024 (UTC)
- Support per EEng, JSS, SFR. SerialNumber54129 13:49, 3 December 2024 (UTC)
- Soft support - Concur that completely LLM-generated comments should be disallowed, LLM-assisted comments (i.e. - I write a comment and then use LLMs as a spell-check/grammar engine) are more of a grey-area and shouldn't be explicitly disallowed. (ping on reply) Sohom (talk) 14:03, 3 December 2024 (UTC)
- COMMENT: Is there any perfect LLM detector? I am an LLM! Are you human? Hello Mr. Turing, testing 1, 2, 3, 4 ...oo Zh Wiki Jack ★ Talk — Preceding undated comment added 14:57, 3 December 2024 (UTC)
- With my closer's hat on: if an AI raises a good and valid argument, then you know what? There's a good and valid argument and I'll give weight to it. But if an AI makes a point that someone else has already made in the usual waffly AI style, then I'm going to ignore it.—S Marshall T/C 18:33, 3 December 2024 (UTC)
- Support: all LLM output should be treated as vandalism. 92.40.198.139 (talk) 20:59, 3 December 2024 (UTC)
- Oppose as written. I'm with Rhododendrites in that we should give a more general caution rather than a specific rule. A lot of the problems here can be resolved by enforcing already-existing expectations. If someone is making a bunch of hollow or boiler-plate comments, or if they're bludgeoning, then we should already be asking them to engage more constructively, LLM or otherwise. I also share above concerns about detection tools being insufficient for this purpose and advise people not to use them to evaluate editor conduct. (Also, can we stop with the "strong" supports and opposes? You don't need to prove you're more passionate than the guy next to you.) Thebiguglyalien (talk) 02:04, 4 December 2024 (UTC)
- Oppose as written. There's already enough administrative discretion to handle this on a case-by-case basis. In agreement with much of the comments above, especially the concern that generative text can be a tool to give people access who might not otherwise (due to ability, language) etc. Regards, --Goldsztajn (talk) 06:12, 4 December 2024 (UTC)
- Strong support LLMs are a sufficiently advanced form of the Automatic Complaint-Letter Generator (1994). Output of LLMs should be collapsed and the offender barred from further discussion on the subject. Inauthentic behavior. Pollutes the discussion. At the very least, any user of an LLM should be required to disclose LLM use on their user page and to provide a rationale. A new user group can also be created (LLM-talk-user or LLM-user) to mark as such, by self or by the community. Suspected sockpuppets + suspected LLM users. The obvious patterns in output are not that hard to detect, with high degrees of confidence. As to "heavily edited" output, where is the line? If someone gets "suggestions" on good points, they should still write entirely in their own words. A legitimate use of AI may be to summarize walls of text. Even then, caution and not to take it at face value. You will end up with LLMs arguing with other LLMs. Lines must be drawn. See also: WikiProject AI Cleanup, are they keeping up with how fast people type a prompt and click a button? Skullers (talk) 07:45, 4 December 2024 (UTC)
- I support the proposal that obvious LLM-generated !votes in discussions should be discounted by the closer or struck (the practical difference should be minimal). Additionally, users who do this can be warned using the appropriate talk page templates (e.g. Template:Uw-ai1), which are now included in Twinkle. I oppose the use of automated tools like GPTZero as the primary or sole method of determining whether comments are generated by LLMs. LLM comments are usually glaringly obvious (section headers within the comment, imprecise puffery, and at AfD an obvious misunderstanding of notability policies and complete disregard for sources). If LLM-ness is not glaringly obvious, it is not a problem, and we should not be going after editors for their writing style or because some tool says they look like a bot. Toadspike [Talk] 10:29, 4 December 2024 (UTC)
- I also think closers should generally be more aggressive in discarding arguments counter to policy and all of us should be more aggressive in telling editors bludgeoning discussions with walls of text to shut up. These also happen to be the two main symptoms of LLMs. Toadspike [Talk] 10:41, 4 December 2024 (UTC)
- In other words LLMs are irrelevant - you just want current policy to be better enforced. Thryduulf (talk) 15:24, 5 December 2024 (UTC)
- I also think closers should generally be more aggressive in discarding arguments counter to policy and all of us should be more aggressive in telling editors bludgeoning discussions with walls of text to shut up. These also happen to be the two main symptoms of LLMs. Toadspike [Talk] 10:41, 4 December 2024 (UTC)
- Oppose Having seen some demonstrated uses of LLMs in the accessibility area, I fear a hard-and-fast rule here is inherently discriminatory. Only in death does duty end (talk) 10:50, 4 December 2024 (UTC)
- What if LLM-users just had to note that a given comment was LLM-generated? JoelleJay (talk) 19:01, 4 December 2024 (UTC)
- What would we gain from that? If the comment is good (useful, relevant, etc) then it's good regardless of whether it was written by an LLM or a human. If the comment is bad then it's bad regardless of whether it was written by an LLM or a human. Thryduulf (talk) 20:04, 4 December 2024 (UTC)
- Well, for one, if they're making an argument like the one referenced by @Selfstudier and @ScottishFinnishRadish above it would have saved a lot of editor time to know that the fake quotes from real references were generated by LLM, so that other editors could've stopped trying to track those specific passages down after the first one failed verification. For another, at least with editors whose English proficiency is noticeably not great the approach to explaining an issue to them can be tailored and misunderstandings might be more easily resolved as translation-related. I know when I'm communicating with people I know aren't native English-speakers I try to be more direct/less idiomatic and check for typos more diligently. JoelleJay (talk) 22:46, 4 December 2024 (UTC)
- What would we gain from that? If the comment is good (useful, relevant, etc) then it's good regardless of whether it was written by an LLM or a human. If the comment is bad then it's bad regardless of whether it was written by an LLM or a human. Thryduulf (talk) 20:04, 4 December 2024 (UTC)
- And see what ChatGPT itself had to say about that idea, at #ChaptGPT_agrees above. EEng 22:25, 4 December 2024 (UTC)
- What if LLM-users just had to note that a given comment was LLM-generated? JoelleJay (talk) 19:01, 4 December 2024 (UTC)
- Oppose per above. As Rhododendrites points out, detection of LLM-generated content is not foolproof and even when detection is accurate, such a practice would be unfair for non-native English speakers who rely on LLMs to polish their work. Additionally, we evaluate contributions based on their substance, not by the identity and social capital of the author, so using LLMs should not be seen as inherently inferior to wholly human writing—are ChatGPT's arguments ipso facto less than a human's? If so, why?
- DE already addresses substandard contributions, whether due to lack of competence or misuse of AI, so a separate policy targeting LLMs is unnecessary. Sincerely, Dilettante 21:14, 4 December 2024 (UTC)
[W]e evaluate contributions based on their substance, not by the identity and social capital of the author
: true in theory; not reflected in practice.
are ChatGPT's arguments ipso facto less than a human's?
Yes. Chatbots are very advanced predictive text engines. They do not have an argument: they iteratively select text chunks based on probabilistic models. As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models. Folly Mox (talk) 14:00, 5 December 2024 (UTC)
- "...LLMs can produce novel arguments that convince independent judges at least on a par with human efforts. Yet when informed about an orator's true identity, judges show a preference for human over LLM arguments." - Palmer, A., & Spirling, A. (2023). Large Language Models Can Argue in Convincing Ways About Politics, But Humans Dislike AI Authors: implications for Governance. Political Science, 75(3), 281–291. https://doi.org/10.1080/00323187.2024.2335471. And that result was based on Meta's OPT-30B model, which performed at about GPT-3 level. There are far better performing models out there now, like GPT-4o and Claude 3.5 Sonnet. Sean.hoyland (talk) 15:24, 5 December 2024 (UTC)
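To make the "iteratively select text chunks based on probabilistic models" point concrete, here is a toy, self-contained Python sketch of next-token sampling. The bigram table is a hypothetical stand-in for a real model's learned distribution, which conditions on far more context, but the loop has the same shape: no argument is held anywhere, only repeated draws from a probability table.
```python
# Toy sketch of iterative next-token selection. The "model" is a hypothetical
# bigram table mapping the last word to weighted candidate continuations.
import random

BIGRAMS = {
    "the":      [("proposal", 0.5), ("policy", 0.5)],
    "proposal": [("should", 1.0)],
    "policy":   [("should", 1.0)],
    "should":   [("pass", 0.6), ("fail", 0.4)],
}

def generate(start: str, max_tokens: int = 5) -> str:
    """Repeatedly sample the next token from the conditional distribution."""
    words = [start]
    for _ in range(max_tokens):
        options = BIGRAMS.get(words[-1])
        if not options:
            break  # no known continuation for this token
        tokens, weights = zip(*options)
        words.append(random.choices(tokens, weights=weights, k=1)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the proposal should pass"
```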
As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models.
Yet your reply to me made no mention of the fact that my comment is almost wholly written by an LLM, the one exception being me replacing "the Wikipedia policy Disruptive editing" with "DE". I went to ChatGPT, provided it a handful of my comments on Wikipedia and elsewhere, as well as a few comments on this discussion, asked it to mimic my style (which probably explains why the message contains my stylistic quirks turned up to 11), and repeatedly asked it to trim the post. I'd envision a ChatGPT account, with a larger context window, would allow even more convincing comments, to say nothing of the premium version. A DUCK-style test for comments singles out people unfamiliar with the differences between formal English and LLM outputs, precisely those who need it most since they can write neither. Others have raised scenarios where a non-fluent speaker may need to contribute.
- In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot. I fed it my comments only so that those familiar with my writing style wouldn't get suspicious. I believe every word in the comment and had considered every point it made in advance, so I see no reason for this to be worth less than if I had typed it out myself. If I'd bullet-pointed my opinion and asked it to expand, that'd have been better yet.
They do not have an argument: they iteratively select text chunks based on probabilistic models.
I'm aware. If a monkey types up Othello, is the play suddenly worth( )less? An LLM is as if the monkey were not selecting words at random, but rather choosing what to type based on contextualized tokens. I believe a text is self-contained and should be considered in its own right, but that's not something I'll sway anyone on or vice versa.
true in theory; not reflected in practice
So we should exacerbate the issue by formalizing this discrimination on the basis of authorship?
- To be clear, this is my only usage of an LLM anywhere on Wikipedia. Sincerely, Dilettante 01:22, 6 December 2024 (UTC)
In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot.
So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted? What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported! It also means those human participants will waste time reading and responding to "users" who cannot be "convinced" of anything. Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop. And if closers are not allowed to discount seemingly-sound arguments solely because they were generated by LLM, then they have to have a lot of faith that the discussion's participants not only noticed the LLM comments, but did thorough fact-checking of any tangible claims made in them. With human comments we can at least assume good faith that a quote is really in a particular inaccessible book. People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM? And obviously people who are not competent in comprehending any language should not be editing Wikipedia... JoelleJay (talk) 03:17, 6 December 2024 (UTC)
- Human !voters sign off and take responsibility for the LLM opinions they publish. If they continue to generate, then the relevant human signer wouldn't be convinced of anything anyway; at least here, the LLM comments might make more sense than whatever nonsense the unpersuadable user might've generated. (And machine translation relies on LLMs, not to mention there are people who don't know any other language yet have trouble communicating. Factual writing and especially comprehension are different from interpersonal persuasion.) While I agree that fact-checking is a problem, I weight it much lower than you in relation to the other effects a ban would cause. Aaron Liu (talk) 15:16, 6 December 2024 (UTC)
I'm of the opinion humans tend to be better at debating, reading between the lines, handling obscure PAGs, and arriving at consensus.What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported!
It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need rebut one LLM. If anything, that's less time wasted. Beyond that, if only one set of arguments is being raised, a multi-paragraph !vote matters about as much as a "Support per above". LLMs are not necessary for people to be disingenuous and !vote for things they don't believe. Genuine question: what's worse, this hypothetical scenario where multiple LLM users are swaying a !vote to an opinion no-one believes or the very real and common scenario that a non-English speaker needs to edit enwiki?Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop.
This proposal wouldn't change for most people that because it's about closers.With human comments we can at least assume good faith that a quote is really in a particular inaccessible book.
No-one's saying you should take an LLM's word for quotes from a book.People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM?
It's a pity you're lobbying to ban most machine translators. Sincerely, Dilettante 17:08, 6 December 2024 (UTC)It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need rebut one LLM. If anything, that's less time wasted.
...You do know how consensus works, right? Since closers are supposed to consider each contribution individually and without bias to "authorship" to determine the amount of support for a position, then even a shitty but shallowly policy-based position would get consensus based on numbers alone. And again, non-English speakers can use machine-translation, like they've done for the last two decades.This proposal wouldn't change for most people that because it's about closers.
Of course it would; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.No-one's saying you should take an LLM's word for quotes from a book.
Of course they are. If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too. Otherwise we would be expecting people to do something like "disregard an argument based on being from an LLM".It's a pity you're lobbying to ban most machine translators.
The spirit of this proposal is clearly not intended to impact machine translation. AI-assisted != AI-generated. JoelleJay (talk) 18:42, 6 December 2024 (UTC)- I appreciate that the availability of easily generated paragraphs of text (regardless of underlying technology) in essence makes the "eternal September" effect worse. I think, though, it's already been unmanageable for years now, without any programs helping. We need a more effective way to manage decision-making discussions so participants do not feel a need to respond to all comments, and the weighing of arguments is considered more systematically to make the community consensus more apparent. isaacl (talk) 19:41, 6 December 2024 (UTC)
Since closers are supposed to consider each contribution individually and without bias to "authorship"
I'm the one arguing for this to be practice, yes.then even a shitty but shallowly policy-based position would get consensus based on numbers alone
That is why I state "per above" and "per User" !votes hold equal potential for misuse.Of course it would; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.
We don't know closers are skilled at recognizing LLM slop. I think my !vote shows many who think they can tell cannot. Any commenter complaining about a non-DUCK post will have to write out "This is written by AI" and explain why. DUCK posts already run afowl of BLUDGEON, DE, SEALION, etc.If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too
. Remind me again of what AGF stands for? Claiming LLMs have faith of any kind, good or bad, is ludicrous. From the policy,Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Wikipedia, even when their actions are harmful.
A reasonable reply would be "Are these quotes generated by AI? If so, please be aware AI chatbots are prone to hallucinations and cannot be trusted to cite accurate quotes." This AGFs the poster doesn't realize the issue and places the burden of proof squarely on them.Example text
generate verb to bring into existence. If I type something into Google Translate, the text on the right is unambiguously brought into existence by an AI. Sincerely, Dilettante 21:22, 6 December 2024 (UTC)- "Per above" !votes do not require other editors to read and/or respond to their arguments, and anyway are already typically downweighted, unlike !votes actively referencing policy. The whole point is to disregard comments that have been found to be AI-generated; it is not exclusively up to the closer to identify those comments in the first place. Yes we will be expecting other editors to point out less obvious examples and to ask if AI was used, what is the problem with that?No, DUCK posts do not necessarily already violate BLUDGEON etc., as I learned in the example from Selfstudier, and anyway we still don't discount the !votes of editors in good standing that bludgeoned/sealioned etc. so that wouldn't solve the problem at all. Obviously other editors will be asking suspected LLM commenters if their comments are from LLMs? But what you're arguing is that even if the commenter says yes, their !vote still can't be disregarded for that reason alone, which means the burden is still on other editors to prove that the content is false. We are not talking about the contextless meaning of the word "generate", we are talking about the very specific process of text generation in the context of generative AI, as the proposal lays out very explicitly. JoelleJay (talk) 02:13, 7 December 2024 (UTC)
- I’m not going to waste time debating someone who resorts to claiming people on the other side are either ignorant of technology or are crude strawmen. If anyone else is interested in actually hearing my responses, feel free to ask. Sincerely, Dilettante 16:13, 7 December 2024 (UTC)
- Or you could actually try to rebut my points without claiming I'm trying to ban all machine translators... JoelleJay (talk) 22:07, 7 December 2024 (UTC)
- For those following along, I never claimed that. I claimed those on JoelleJay’s side are casting !votes such that most machine translators would be banned. It was quite clear at the time that they, personally, support a carve out for machine translation and I don’t cast aspersions. Sincerely, Dilettante 15:42, 8 December 2024 (UTC)
- Support a broad bar against undisclosed LLM-generated comments and even a policy that undisclosed LLM-generated comments could be sanctionable, in addition to struck through / redacted / ignored; people using them for accessibility / translation reasons could just disclose that somewhere (even on their user page would be fine, as long as they're all right with some scrutiny as to whether they're actually using it for a legitimate purpose.) The fact is that LLM comments raise significant risk of abuse, and often the fact that a comment is clearly LLM-generated is often going to be the only evidence of that abuse. I wouldn't be opposed to a more narrowly-tailored ban on using LLMs in any sort of automated way, but I feel a broader ban may be the only practical way to confront the problem. That said, I'd oppose the use of tools to detect LLM-comments, at least as the primary evidence; those tools are themselves unreliable LLM things. It should rest more on WP:DUCK issues and behavioral patterns that make it clear that someone is abusing LLMs. --Aquillion (talk) 22:08, 4 December 2024 (UTC)
- Support per reasons discussed above; something generated by an LLM is not truly the editor's opinion. On an unrelated note, have we seen any LLM-powered unapproved bots come in and do things like POV-pushing and spam page creation without human intervention? If we haven't, I think it's only a matter of time. Passengerpigeon (talk) 23:23, 4 December 2024 (UTC)
- Weak oppose in the sense that I don't think all LLM discussion text should be deleted. There are at least a few ESL users who use LLMs for assistance but try to check the results as best they can before posting, and I don't think their comments should be removed indiscriminately. What I do support (although not as a formal WP:PAG) is being much more liberal in hatting LLM comments when the prompter has failed to prevent WP:WALLOFTEXT/irrelevant/incomprehensible output than we maybe would for human-generated text of that nature. Mach61 03:05, 5 December 2024 (UTC)
- Oppose Any comments made by any editors are of their own responsibility and representing their own chosen opinions to hit the Publish Changes button on. If that comment was made by an LLM, then whatever it says is something the editor supports. I see no reason whatsoever to collapse anything claimed to be made by an LLM (whose detectors are 100% not reliable in the first place). If the comment being made is irrelevant to the discussion, then hatting it is already something covered by policy in the first place. This does make me want to start my comments with "As a large language model trained by OpenAI" though just to mess with people trying to push these sorts of policy discussions. SilverserenC 05:29, 5 December 2024 (UTC)
- Or, as ChatGPT puts it,
Why banning LLM usage in comments would be detrimental, a ChatGPT treatise
- I'm honestly a bit impressed with the little guy. SilverserenC 05:39, 5 December 2024 (UTC)
- It is somewhat amusing how easy it is to get these chatbots to output apologia for these chatbots. Too bad it's always so shallow. Probably because the people who inserted those canned responses are shallow people, in my opinion. Simonm223 (talk) 19:44, 6 December 2024 (UTC)
- I'm honestly a bit impressed with the little guy. SilverserenC 05:39, 5 December 2024 (UTC)
- Support those who are opposing have clearly never had to deal with trolls who endlessly WP:SEALION. If I wanted to have a discussion with a chatbot, I'd go and find one. ~~ AirshipJungleman29 (talk) 13:14, 5 December 2024 (UTC)
- What's wrong with just banning and hatting the troll? Aaron Liu (talk) 13:49, 5 December 2024 (UTC)
- Someone trolling and sealioning can (and should) be blocked under current policy, whether they use an LLM or not is irrelevant. Thryduulf (talk) 15:22, 5 December 2024 (UTC)
- Oppose per Rhododendrites. This is a case-by-case behavioral issue, and using LLMs != being a troll. Frostly (talk) 17:30, 5 December 2024 (UTC)
- Support: the general principle is sound - where the substance has been originally written by gen-AI, comments will tend to add nothing to the discussion and even annoy or confuse other users. In principle, we should not allow such tools to be used in discussions. Comments written originally before improvement or correction by AI, particularly translation assistants, fall into a different category. Those are fine. There also has to be a high standard for comment removal. Suspicion that gen-AI might have been used is not enough. High GPTZero scores are not enough. The principle should go into policy but under a stonking great caveat - WP:AGF takes precedence and a dim view will be taken of generative-AI inquisitors. arcticocean ■ 17:37, 5 December 2024 (UTC)
- Support If a human didn't write it, humans shouldn't spend time reading it. I'll go further and say that LLMs are inherently unethical technology and, consequently, people who rely on them should be made to feel bad. ESL editors who use LLMs to make themselves sound like Brad Anderson in middle management should stop doing that because it actually gets in the way of clear communication. I find myself unpersuaded by arguments that existing policies and guidelines are adequate here. Sometimes, one needs a linkable statement that applies directly to the circumstances at hand. By analogy, one could argue that we don't really need WP:BLP, for example, because adhering to WP:V, WP:NPOV, and WP:NOR ought already to keep bad material out of biographies of living people. But in practice, it turned out that having a specialized policy that emphasizes the general ethos of the others while tailoring them to the problem at hand is a good thing. XOR'easter (talk) 18:27, 5 December 2024 (UTC)
- Strong support - Making a computer generate believable gibberish for you is a waste of time, and tricking someone else into reading it should be a blockable offense. If we're trying to create an encyclopedia, you cannot automate any part of the thinking. We can automate processes in general, but any attempt at automating the actual discussion or thought-processes should never be allowed. If we allow this, it would waste countless hours of community time dealing with inane discussions, sockpuppetry, and disruption. Imagine a world where LLMs are allowed and popular - it's a sockpuppeteer's dream scenario - you can run 10 accounts and argue the same points, and the reason why they all sound alike is merely because they're all LLM users. You could even just spend a few dollars a month and run 20-30 accounts to automatically disrupt Wikipedia discussions while you sleep, and if LLM usage was allowed, it would be very hard to stop. However, I don't have much faith in AI detection tools (partially because they're based on the same underlying flawed technology), and would want any assumption of LLM usage to be based on obvious evidence, not just a score on some website. Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop BugGhost 🦗👻 19:15, 5 December 2024 (UTC)
- I agree with your assessment “Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop” but unfortunately some editors who should really know better think it’s WaCkY to fill serious discussions with unfunny, distracting “humor”. Dronebogus (talk) 21:54, 5 December 2024 (UTC)
- I also concur. "I used the machine for generating endless quantities of misleading text to generate more text" is not a good joke. XOR'easter (talk) 22:46, 5 December 2024 (UTC)
- Strong support if you asked a robot to spew out some AI slop to win an argument you’re basically cheating. The only ethical reason to do so is because you can’t speak English well, and the extremely obvious answer to that is “if you can barely speak English why are you editing English Wikipedia?” That’s like a person who doesn’t understand basic physics trying to explain the second law of thermodynamics using a chatbot. Dronebogus (talk) 21:32, 5 December 2024 (UTC)
- I don't think "cheating" is a relevant issue here. Cheating is a problem if you use a LLM to win and get a job, award, college acceptance etc. that you otherwise wouldn't deserve. But WP discussions aren't a debating-skills contest, they're an attempt to determine the best course of action.
- So using an AI tool in a WP discussion is not cheating (though there may be other problems), just as riding a bike instead of walking isn't cheating unless you're trying to win a race. ypn^2 22:36, 5 December 2024 (UTC)
- Maybe “cheating” isn’t the right word. But I think that a) most AI generated content is garbage (it can polish the turd by making it sound professional, but it’s still a turd underneath) and b) it’s going to be abused by people trying to gain a material edge in an argument. An AI can pump out text far faster than a human and that can drown out or wear down the opposition if nothing else. Dronebogus (talk) 08:08, 6 December 2024 (UTC)
- Bludgeoning is already against policy. It needs to be more strongly enforced, but it needs to be more strongly enforced uniformly rather than singling out comments that somebody suspects might have had AI-involvement. Thryduulf (talk) 10:39, 6 December 2024 (UTC)
- Support; I agree with Remsense and jlwoodwa, among others: I wouldn't make any one AI-detection site the Sole Final Arbiter of whether a comment "counts", but I agree it should be expressly legitimate to discount AI / LLM slop, at the very least to the same extent as closers are already expected to discount other insubstantial or inauthentic comments (like if a sock- or meat-puppet copy-pastes a comment written for them off-wiki, as there was at least one discussion and IIRC ArbCom case about recently). -sche (talk) 22:10, 5 December 2024 (UTC)
- You don't need a new policy that does nothing but duplicate a subset of existing policy. At most what you need is to add a sentence to the existing policy that states "this includes comments written using LLMs", however you'd rightly get a lot of pushback on that because it's completely redundant and frankly goes without saying. Thryduulf (talk) 23:37, 5 December 2024 (UTC)
- Support hallucinations are real. We should be taking a harder line against LLM generated participation. I don't think everyone who is doing it knows that they need to stop. Andre🚐 23:47, 5 December 2024 (UTC)
- Comment - Here is something that I imagine we will see more often. I wonder where it fits into this discussion. A user employs perplexity's RAG-based system, search+LLM, to help generate their edit request (without the verbosity bias that is common when people don't tell LLMs how much output they want). Sean.hoyland (talk) 03:13, 6 December 2024 (UTC)
- Support per all above. Discussions are supposed to include the original arguments/positions/statements/etc of editors here, not off-site chatbots. The Kip (contribs) 03:53, 6 December 2024 (UTC)
- I also find it pretty funny that ChatGPT itself said it shouldn't be used, as per the premise posted above by EEng. The Kip (contribs) 03:58, 6 December 2024 (UTC)
- "sycophancy is a general behavior of state-of-the-art AI assistants, likely driven in part by human preference judgments favoring sycophantic responses" - Towards Understanding Sycophancy in Language Models. They give us what we want...apparently. And just like with people, there is position bias, so the order of things can matter. Sean.hoyland (talk) 04:26, 6 December 2024 (UTC)
- (Is this where I respond? If not, please move.) LLM-generated prose should be discounted. Sometimes there will be a discernible point in there; it may even be what the editor meant, lightly brushed up with what ChatGPT thinks is appropriate style. (So I wouldn't say "banned and punishable" in discussions, although we already deprecate machine translations on en.wiki and for article prose, same difference—never worth the risk.) However, LLMs don't think. They can't explain with reference to appropriate policy and guidelines. They may invent stuff, or use the wrong words—at AN recently, an editor accused another of "defaming" and "sacrilege", thus drowning their point that they thought that editor was being too hard on their group, by putting their signature to an outrageous personal attack. I consider that an instance of LLM use letting them down. If it's not obvious that it is LLM use, then the question doesn't arise, right? Nobody is arguing for requiring perfect English. That isn't what WP:CIR means. English is a global language, and presumably for that reason, many editors on en.wiki are not native speakers, and those that aren't (and those that are!) display a wide range of ability in the language. Gnomes do a lot of fixing of spelling, punctuation and grammar in articles. In practice, we don't have a high bar to entrance in terms of English ability (although I think a lot more could be done to explain to new editors whose English is obviously non-native what the rule or way of doing things is that they have violated). And some of our best writers are non-native; a point that should be emphasised because we all have a right of anonymity here, many of us use it, and it's rare, in particular, that I know an editor's race, or even nationality (which may not be the same as where they live). But what we do here is write in English: both articles and discussions. If someone doesn't have the confidence to write their own remark or !vote, then they shouldn't participate in discussions; I strongly suspect that it is indeed a matter of confidence, of wanting to ensure the English is impeccable. LLMs don't work that way, really. They concoct things like essays based on what others have written. Advice to use them in a context like a Wikipedia discussion is bad advice. At best it suggests you let the LLM decide which way to !vote. If you have something to say, say it and if necessary people will ask a question for clarification (or disagree with you). They won't mock your English (I hope! Civility is a basic rule here!). It happens in pretty much every discussion that somebody makes an English error. No biggie. I'll stop there before I make any more typos myself; typing laboriously on my laptop in a healthcare facility, and anyway Murphy's Law covers this. Yngvadottir (talk)
- I dunno about this specifically but I want to chime in to say that I find LLM-generated messages super fucking rude and unhelpful and support efforts to discourage them. – Joe (talk) 08:15, 6 December 2024 (UTC)
- Comment I think obvious LLM/chatbot text should at least be tagged through an Edit filter for Recent Changes, then RC Patrollers and reviewers can have a look and decide for themselves. A♭m (Ring!) (Notes) 11:58, 6 December 2024 (UTC)
- How do you propose that such text be identified by an edit filter? LLM detections tools have high rates of both false positives and false negatives. Thryduulf (talk) 12:47, 6 December 2024 (UTC)
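(Illustration: the only part of this that seems machine-checkable today is blatant boilerplate, not statistical detection. Below is a minimal sketch of that idea; real edit filters are written in the AbuseFilter rule language rather than Python, and the phrase list is an assumption for demonstration only.)
```python
# A minimal sketch (not an actual AbuseFilter rule) of tagging edits that
# contain blatant chatbot boilerplate. The phrase list is illustrative only;
# a real filter would need community-vetted patterns, and would still miss
# most LLM output.
import re

CHATBOT_TELLS = [
    r"as an ai language model",
    r"is there anything else i can help you with",
    r"i hope this helps!",
    r"certainly! here(?:'s| is)",
]
TELL_RE = re.compile("|".join(CHATBOT_TELLS), re.IGNORECASE)

def tag_edit(new_text: str) -> bool:
    """Return True if the edit should be tagged for human review."""
    return TELL_RE.search(new_text) is not None

print(tag_edit("Is there anything else I can help you with?"))   # True
print(tag_edit("Oppose per WP:V; the sourcing is inadequate."))  # False
```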
- It might become possible once watermarks (like DeepMind's SynthID) are shown to be robust and are adopted. Some places are likely to require it at some point e.g. EU. I guess it will take a while though and might not even happen e.g. I think OpenAI recently decided to not go ahead with their watermark system for some reason. Sean.hoyland (talk) 13:17, 6 December 2024 (UTC)
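(For background, a toy sketch of how detection works in green-list watermarking schemes of this family, in the style of the published Kirchenbauer et al. 2023 approach; this is not SynthID's actual algorithm, and the seeding and threshold details here are assumptions.)
```python
# Toy sketch of *detection* in a green-list watermark: generation softly
# favors a pseudorandom "green" half of the vocabulary at each step, so
# watermarked text shows a statistical excess of green tokens.
import hashlib
from math import sqrt

GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked "green" per step

def is_green(prev_token: str, token: str) -> bool:
    # Pseudorandomly assign the token to the green list, seeded on its predecessor.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(tokens: list[str]) -> float:
    """z-score of the green-token count; large positive values suggest a watermark."""
    n = len(tokens) - 1
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    return (greens - expected) / sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))

# Unwatermarked text hovers near z = 0; a generator that consistently
# preferred green tokens would push z far above it.
print(round(watermark_z_score("the cat sat on the mat".split()), 2))
```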
- It will still be trivial to bypass the watermarks, or use LLMs that don't implement them. It also (AIUI) does nothing to reduce false positives (which for our usecase are far more damaging than false negatives). Thryduulf (talk) 13:30, 6 December 2024 (UTC)
- Maybe, that seems to be the case with some of the proposals. Others, like SynthID claim high detection rates, maybe because even a small amount of text contains a lot of signals. As for systems that don't implement them, I guess that would be an opportunity to make a rule more nuanced by only allowing use of watermarked output with verbosity limits...not that I support a rule in the first place. People are going to use/collaborate with LLMs. Why wouldn't they? Sean.hoyland (talk) 14:38, 6 December 2024 (UTC)
- I don't think watermarks are a suitable thing to take into account. My view is that LLM usage should be a blockable offense on any namespace, but if it ends up being allowed under some circumstances then we at least need mandatory manual disclosures for any usage. Watermarks won't work / aren't obvious enough - we need something like {{LLM}} but self-imposed, and not tolerate unmarked usage. BugGhost 🦗👻 18:21, 6 December 2024 (UTC)
- They will have to work at some point (e.g. [8][9]). Sean.hoyland (talk) 06:27, 7 December 2024 (UTC)
- Good news! Queen of Hearts is already working on that in 1325. jlwoodwa (talk) 16:12, 6 December 2024 (UTC)
- Comment As a practical matter, users posting obvious LLM-generated content will typically be in violation of other rules (e.g. disruptive editing, sealioning), in which case their discussion comments absolutely should be ignored, discouraged, discounted, or (in severe cases) hatted. But a smaller group of users (e.g. people using LLMs as a translation tool) may be contributing productively, and we should seek to engage with, rather than discourage, them. So I don't see the need for a separate bright-line policy that risks erasing the need for discernment — in most cases, a friendly reply to the user's first LLM-like post (perhaps mentioning WP:LLM, which isn't a policy or guideline, but is nevertheless good advice) will be the right approach to work out what's really going on. Preimage (talk) 15:53, 6 December 2024 (UTC)
- Yeah, this is why I disagree with the BLP analogy above. There's no great risk/emergency to ban the discernment. Aaron Liu (talk) 17:34, 6 December 2024 (UTC)
- Those pesky sealion Chatbots are just the worst! Martinevans123 (talk) 18:41, 6 December 2024 (UTC)
- Some translation tools have LLM assistance, but the whole point of generative models is to create text far beyond what is found in the user's input, and the latter is clearly what this proposal covers. JoelleJay (talk) 19:01, 6 December 2024 (UTC)
- That might be what the proposal intends to cover, but it is not what the proposal actually covers. The proposal covers all comments that have been generated by LLMs and/or AI, without qualification. Thryduulf (talk) 01:05, 7 December 2024 (UTC)
- 70+% here understand the intention matches the language: generated by LLMs etc means "originated through generative AI tools rather than human thought", not "some kind of AI was involved in any step of the process". Even LLM translation tools don't actually create meaningful content where there wasn't any before; the generative AI aspect is only in the use of their vast training data to characterize the semantic context of your input in the form of mathematical relationships between tokens in an embedding space, and then match it with the collection of tokens most closely resembling it in the other language. There is, definitionally, a high level of creative constraint in what the translation output is since semantic preservation is required, something that is not true for text generation. JoelleJay (talk) 04:01, 7 December 2024 (UTC)
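(A toy illustration of the embedding-space matching described above; the three-dimensional vectors and tiny vocabulary are invented for demonstration, where real models learn thousands of dimensions from massive corpora.)
```python
# Toy illustration of "match the input with the closest tokens in the other
# language's embedding space": constrained matching, not open-ended generation.
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

french_embeddings = {   # assumed French-side embeddings
    "chat":   [0.9, 0.1, 0.0],
    "chien":  [0.1, 0.9, 0.0],
    "maison": [0.0, 0.1, 0.9],
}

def closest_french(english_vec):
    """Pick the French token whose embedding best matches the input vector."""
    return max(french_embeddings, key=lambda w: cosine(english_vec, french_embeddings[w]))

# An (assumed) embedding for "cat" lands nearest "chat".
print(closest_french([0.85, 0.15, 0.05]))  # chat
```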
- Do you have any evidence for your assertion that 70% of respondents have interpreted the language in the same way as you? Reading the comments associated with the votes suggests that it's closer to 70% of respondents who don't agree with you. Even if you are correct, 30% of people misreading a policy indicates the policy is badly worded. Thryduulf (talk) 08:34, 7 December 2024 (UTC)
- I think @Bugghost has summarized the respondent positions sufficiently below. I also think some portion of the opposers understand the proposal perfectly well and are just opposing anything that imposes participation standards. JoelleJay (talk) 22:54, 7 December 2024 (UTC)
- There will be many cases where it is not possible to say whether a piece of text does or does not contain "human thought" by observing the text, even if you know it was generated by an LLM. Statements like "originated through generative AI tools rather than human thought" will miss a large class of use cases, a class that will probably grow over the coming years. People work with LLMs to produce the output they require. It is often an iterative process by necessity because people and models make mistakes. An example of when "...rather than human thought" is not the case is when someone works with an LLM to solve something like a challenging technical problem where neither the person nor the model has a satisfactory solution to hand. The context window means that, just like with human collaborators, a user can iterate towards a solution through dialog and testing, exploring the right part of the solution space. Human thought is not absent in these cases, it is present in the output, the result of a collaborative process. In these cases, something "far beyond what is found in the user's input" is the objective, it seems like a legitimate objective, but regardless, it will happen, and we won't be able to see it happening. Sean.hoyland (talk) 10:46, 7 December 2024 (UTC)
- Yes, but this proposal is supposed to apply to just the obvious cases and will hopefully discourage good-faith users from using LLMs to create comments wholesale in general. It can be updated as technology progresses. There's also no reason editors using LLMs to organize/validate their arguments, or as search engines for whatever, have to copy-paste their raw output, which is much more of a problem since it carries a much higher chance of hallucination. That some people who are especially familiar with how to optimize LLM use, or who pay for advanced LLM access, will be able to deceive other editors is not a reason to not formally proscribe wholesale comment generation. JoelleJay (talk) 22:27, 7 December 2024 (UTC)
- That's reasonable. I can get behind the idea of handling obvious cases from a noise reduction perspective. But for me, the issue is noise swamping signal in discussions rather than how it was generated. I'm not sure we need a special rule for LLMs, maybe just a better way to implement the existing rules. Sean.hoyland (talk) 04:14, 8 December 2024 (UTC)
- Support "I Am Not A ChatBot; I Am A Free Wikipedia Editor!" Martinevans123 (talk) 18:30, 6 December 2024 (UTC)
- Comment: The original question was whether we should discount, ignore, strikethrough, or collapse chatbot-written content. I think there's a very big difference between these options, but most support !voters haven't mentioned which one(s) they support. That might make judging the consensus nearly impossible; as of now, supporters are the clear !majority, but supporters of what? — ypn^2 19:32, 6 December 2024 (UTC)
- That means that supporters support the proposal
that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner
. Not sure what the problem is here. Supporters support the things listed in the proposal - we don't need a prescribed 100% strict procedure, it just says that supporters would be happy with closers discounting, ignoring or under some circumstances deleting LLM content in discussions. BugGhost 🦗👻 19:40, 6 December 2024 (UTC)
- Doing something? At least the stage could be set for a follow-on discussion. Selfstudier (talk) 19:40, 6 December 2024 (UTC)
- More people have bolded "support" than other options, but very few of them have even attempted to refute the arguments against (and most that have attempted have done little more than handwaving or directly contradicting themselves), and multiple of those who have bolded "support" do not actually support what has been proposed when you read their comment. It's clear to me there is not going to be a consensus for anything other than "many editors dislike the idea of LLMs" from this discussion. Thryduulf (talk) 00:58, 7 December 2024 (UTC)
- Arguing one point doesn't necessarily require having to refute every point the other side makes. I can concede that "some people use LLMs to improve their spelling and grammar" without changing my overriding view that LLMs empower bad actors, time wasters and those with competence issues, with very little to offer wikipedia in exchange. Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first. BugGhost 🦗👻 09:00, 7 December 2024 (UTC)
- If you want to completely ignore all the other arguments in opposition that's your choice, but don't expect closers to attach much weight to your opinions. Thryduulf (talk) 09:05, 7 December 2024 (UTC)
- Ok, here's a list of the main opposition reasonings, with individual responses.
- What about translations? - Translations are not up for debate here, the topic here is very clearly generative AI, and attempts to say that this topic covers translations as well are incorrect. No support voters have said the proposal should discount translated text, just oppose voters who are trying to muddy the waters.
- What about accessibility? - This could be a legitimate argument, but I haven't seen it substantiated anywhere other than handwaving "AI could help people!" arguments, which I would lump into the spelling and grammar argument I responded to above.
- Detection tools are inaccurate - This I very much agree with, and noted in my support and in many others as well. But there is no clause in the actual proposal wording that mandates the use of automated AI detection, and I assume the closer would note that.
- False positives - Any rule can have a potential for false positives, from wp:DUCK to close paraphrasing to NPA. We've just got to as a community become skilled at identifying genuine cases, just like we do for every other rule.
- LLM content should be taken at face value and see if it violates some other policy - hopelessly naive stance, and a massive timesink. Anyone who has had the misfortune of going on X/twitter in the last couple of years should know that AI is not just used as an aid for those who have trouble typing, it is mainly used to spam, disrupt discussion, and astroturf political opinions. Anyone who knows how bad the sockpuppetry issue is around CTOPs should be absolutely terrified of when (not if) someone decides to launch a full-throated wave of AI bots on Wikipedia discussions, because if we have to individually sanction each one like a human then admins will literally have no time for anything else.
- I genuinely cannot comprehend how some people could see how AI is decimating the internet through spam, bots and disinformation and still think for even one second that we should open the door to it. BugGhost 🦗👻 10:08, 7 December 2024 (UTC)
- There is no door. This is true for sockpuppetry too in my opinion. There can be a rule that claims there is a door, but it is more like a bead curtain. Sean.hoyland (talk) 11:00, 7 December 2024 (UTC)
- The Twitter stuff is not a good comparison here. Spam is already nukable on sight, mass disruptive bot edits are also nukable on sight, and it's unclear how static comments on Wikipedia would be the best venue to astroturf political opinions (most of which would be off-topic anyway, i.e., nukable on sight). I'd prefer if people didn't use ChatGPT to formulate their points, but if they're trying to formulate a real point then that isn't disruptive in the same way spam is. Gnomingstuff (talk) 02:22, 10 December 2024 (UTC)
it's unclear how static comments on Wikipedia would be the best venue to astroturf political opinions
- by disrupting RFCs and talk page discussions a bad actor could definitely use chatgpt to astroturf. A large proportion of the world uses Wikipedia (directly or indirectly) to get information - it would be an incredibly valuable thing to manipulate. My other point is that AI disruption bots (like the ones on twitter) would be indistinguishable from individuals using LLMs to "fix" spelling and grammar - by allowing one we make the other incredibly difficult to identify. How can you tell the difference between a bot and someone who just uses chatgpt for every comment? BugGhost 🦗👻 09:16, 10 December 2024 (UTC)
- You can't. That's the point. This is kind of the whole idea of WP:AGF. Gnomingstuff (talk) 20:22, 13 December 2024 (UTC)
Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first.
- Social anxiety: Say "I" am a person unconfident in my writing. I imagine that when I post my raw language, I embarrass myself, and my credibility vanishes, while in the worst case nobody understands what I mean. As bad confidence is often built up through negative feedback, it's usually meritful, or was meritful at some point, for someone to seek outside help. Aaron Liu (talk) 23:46, 8 December 2024 (UTC)
- While I sympathise with that hypothetical, Wikipedia isn't therapy and we shouldn't make decisions that do long-term harm to the project just because a hypothetical user feels emotionally dependent on a high tech spellchecker. I also think that in general Wikipedia editors (myself included) are pretty relaxed about spelling and grammar in talk/WP space. BugGhost 🦗👻 18:45, 10 December 2024 (UTC)
- We also shouldn't do long term harm to the project just because a few users are wedded to the idea that LLMs are and will always be some sort of existential threat. The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project than LLM-comments that are all either useful, harmless or collapseable/removable/ignorable at present. Thryduulf (talk) 19:06, 10 December 2024 (UTC)
The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project
- the same could be said for WP:DUCK. The reason why it's not a big problem for DUCK is because the confidence level is very high. Like I've said in multiple other comments, I don't think "AI detectors" should be trusted, and that the bar for deciding whether something was created via LLM should be very high. I 100% understand your opinion and the reasoning behind it, I just think we have differing views on how well the community at large can identify AI comments. BugGhost 🦗👻 09:07, 11 December 2024 (UTC)
- I don't see how allowing shy yet avid users to contribute has done or will do long-term harm. The potential always outweighs rational evaluation of outcomes for those with anxiety, a condition that is not behaviorally disruptive. Aaron Liu (talk) 02:47, 11 December 2024 (UTC)
- I definitely don't want to disallow shy yet avid users! I just don't think having a "using chatgpt to generate comments is allowed" rule is the right solution to that problem, considering the wider consequences. BugGhost 🦗👻 08:52, 11 December 2024 (UTC)
- Did you mean "... disallowed"? If so, I think we weigh accessibility differently against the quite low amount of AI trolling. Aaron Liu (talk) 14:10, 11 December 2024 (UTC)
- Support strikethroughing or collapsing per everyone else. The opposes that mention ESL have my sympathy, but I am not sure how many of them are ESL themselves. Having learnt English as my second language, I have always found it easier to communicate when users are expressing things in their own way, not polished by some AI. I sympathise with the concerns and believe the right solution is to lower our community standards with respect to WP:CIR and similar (in terms of ESL communication) without risking hallucinations by AI. Soni (talk) 02:52, 7 December 2024 (UTC)
- Oppose the use of AI detection tools. False positive rates for AI-detection are dramatically higher for non-native English speakers. AI detection tools had a 5.1% false positive rate for human-written text from native English speakers, but human-written text from non-native English speakers had a 61.3% false positive rate. ~ F4U (talk • they/it) 17:53, 8 December 2024 (UTC)
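(To make concrete what a 61.3% false positive rate implies, here is the base-rate arithmetic; the prevalence and true-positive figures below are assumptions for illustration, and only the false-positive rate comes from the cited study.)
```python
# Base-rate arithmetic for the cited 61.3% false-positive rate on
# human-written text from non-native English speakers.
base_rate = 0.05   # assumed share of comments that are actually LLM-generated
tpr = 0.80         # assumed chance the detector flags a genuine LLM comment
fpr = 0.613        # false-positive rate on non-native English text (cited study)

flagged_true = base_rate * tpr
flagged_false = (1 - base_rate) * fpr
precision = flagged_true / (flagged_true + flagged_false)
print(f"{precision:.1%} of flagged ESL comments would actually be LLM-written")
# ~6.4%: under these assumptions, nearly every flag against an ESL editor
# would be a false accusation.
```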
- Oppose - I'm sympathetic to concerns of abuse through automated mass-commenting, but this policy looks too black-and-white. Contributors may use LLMs for many reasons, including to fix the grammar, to convey their thoughts more clearly, or to adjust the tone for a more constructive discussion. As it stands, this policy may lead to dismissing good-faith AI-assisted comments, as well as false positives, without considering the context. Moreover, while mainstream chatbots are not designed to just mimic the human writing style, there are existing tools that can make AI-generated text more human-like, so this policy does not offer that much protection against maliciously automated contributions. Alenoach (talk) 01:12, 9 December 2024 (UTC)
- Oppose – Others have cast doubt on the efficacy of tools capable of diagnosing LLM output, and I can't vouch for its being otherwise. If EEng's example of ChatBot output is representative—a lengthy assertion of notability without citing sources—that is something that could well be disregarded whether it came from a bot or not. If used carefully, AI can be useful as an aide-memoire (such as with a spell- or grammar-checker) or as a supplier of more felicitous expression than the editor is naturally capable of (e.g. Google Translate). Dhtwiki (talk) 10:27, 9 December 2024 (UTC)
- Comment / Oppose as written. It's not accurate that GPTZero is good at detecting AI-generated content. Citations (slightly out of date but there's little reason to think things have changed from 2023): https://www.aiweirdness.com/writing-like-a-robot/ , https://www.aiweirdness.com/dont-use-ai-detectors-for-anything-important/ . For those too busy to read, a few choice quotes: "the fact that it insisted even one [real book] excerpt is not by a human means that it's useless for detecting AI-generated text," and "Not only do AI detectors falsely flag human-written text as AI-written, the way in which they do it is biased" (citing https://arxiv.org/abs/2304.02819 ). Disruptive, worthless content can already be hatted, and I'm not opposed to doing so. Editors should be sharply told to use their own words, and if not already written, an essay saying we'd rather have authentic if grammatically imperfect comments than AI-modulated ones would be helpful to cite at editors who offer up AI slop. But someone merely citing GPTZero is not convincing. GPTZero will almost surely misidentify genuine commentary as AI-generated. So fine with any sort of reminder that worthless content can be hatted, and fine with a reminder not to use ChatGPT for creating Wikipedia talk page posts, but not fine with any recommendations of LLM-detectors. SnowFire (talk) 20:00, 9 December 2024 (UTC)
- @SnowFire, I can't tell if you also oppose the actual proposal, which is to permit hatting/striking obvious LLM-generated comments (using GPTzero is a very minor detail in JSS's background paragraph, not part of the proposal). JoelleJay (talk) 01:47, 11 December 2024 (UTC)
- I support the proposal in so far as disruptive comments can already be hatted and that LLM-generated content is disruptive. I am strongly opposed to giving well-meaning but misguided editors a license to throw everyone's text into an AI-detector and hat the comments that score poorly. I don't think it was that minor a detail, and to the extent that detail is brought up, it should be as a reminder to use human judgment and forbid using alleged "AI detectors" instead. SnowFire (talk) 03:49, 11 December 2024 (UTC)
- Support collapsing AI (specifically, Large language model) comments by behavioral analysis (most actually disruptive cases I've seen are pretty obvious) and not the use of inaccurate tools like ZeroGPT. I think hatting with the title "Editors suspect that this comment has been written by a Large language model" is appropriate. They take up SO much space in a discussion because they are also unnecessarily verbose, and talk on and on but never ever say something that even approaches having substance. Discussions are for human Wikipedia editors; we shouldn't have to sift through comments someone put 0 effort into and outsourced to a robot that writes using random numbers (that's a major part of how tools like ChatGPT work and maintain variety). If someone needs to use an AI chatbot to communicate because they don't understand English, then they are welcome to contribute to their native language Wikipedia, but I don't think they have the right to insist that we at enwiki spend our effort reading comments they put minimal effort into beyond opening the ChatGPT website. If really needed, they can write in their native language and use a non-LLM tool like Google Translate. The use of non-LLM tools like Grammarly, Google Translate, etc. I think should still be OK for all editors, as they only work off comments that editors have written themselves. MolecularPilot 🧪️✈️ 05:10, 10 December 2024 (UTC)
- Adding that enforcing people writing things in their own words will actually help EAL (English additional language) editors contribute here. I work with EAL people irl, and even people who have almost native proficiency with human-written content find AI output confusing because it says things in the most confusing, verbose ways using difficult sentence constructions and words. I've seen opposers in this discussion who maybe haven't had experience working with EAL people go "what about EAL people?", but really, I think this change will help them (open to being corrected by someone who is EAL, tho). MolecularPilot 🧪️✈️ 05:17, 10 December 2024 (UTC)
- Also, with regards to oppose comments that discussions are not a vote so closers will ignore AI statements which don't have merit - unedited LLM statements are incredibly verbose and annoying, and clog up the discussion. Imagine multiple paragraphs, each with a heading, but all of which say almost nothing; they're borderline WP:BLUDGEONy. Giving the power to HAT them will help genuine discussion contributors keep with the flow of human arguments and avoid scaring away potential discussion contributors who are intimidated or don't feel they have the time to read the piles of AI nonsense that fill the discussion. MolecularPilot 🧪️✈️ 06:38, 10 December 2024 (UTC)
- Support (removing) in general. How is this even a question? There is no case-by-case. It is a fundamental misunderstanding of how LLMs work to consider their output reliable without careful review. At which point, the editor could have written it themselves without inherent LLM bias. The point of any discussion is to provide analytical response based on the context, not have some tool regurgitate something from a training set that sounds good. And frankly, it is disrespectful to make someone read "AI" responses. It is a tool and there is a place and time for it, but not in discussions in an encyclopedia. — HELLKNOWZ ∣ TALK 15:41, 10 December 2024 (UTC)
- Strong Support. I'm very interested in what you (the generic you) have to say about something. I'm not remotely interested in what a computer has to say about something. It provides no value to the discussion and is a waste of time. Useight (talk) 18:06, 10 December 2024 (UTC)
- Comments that provide no value to the discussion can already be hatted and ignored regardless of why they provide no value, without any of the false positives or false negatives inherent in this proposal. Thryduulf (talk) 18:25, 10 December 2024 (UTC)
- Indeed, and that's fine for one-offs when a discussion goes off the rails or what-have-you. But we also have WP:NOTHERE for disruptive behavior, not working collaboratively, etc. I'm suggesting that using an AI to write indicates that you're not here to build the encyclopedia, you're here to have an AI build the encyclopedia. I reiterate my strong support for AI-written content to be removed, struck, collapsed, or hatted and would support further measures even beyond those. Useight (talk) 21:54, 11 December 2024 (UTC)
- There are two sets of people described in your comment: those who use AI and those who are NOTHERE. The two sets overlap, but nowhere near sufficiently to declare that everybody in the former set are also in the latter set. If someone is NOTHERE they already can and should be blocked, regardless of how they evidence that. Being suspected of using AI (note that the proposal does not require proof) is not sufficient justification on its own to declare someone NOTHERE, per the many examples of constructive use of AI already noted in this thread. Thryduulf (talk) 22:03, 11 December 2024 (UTC)
- To reiterate, I don't believe that any use of AI here is constructive, thus using it is evidence of WP:NOTHERE, and, therefore, the set of people using AI to write is completely circumscribed within the set of people who are NOTHERE. Please note that I am referring to users who use AI-generated writing, not users suspected of using AI-generated writing. I won't be delving into how one determines whether someone is using AI or how accurate it is, as that is, to me, a separate discussion. This is the end of my opinion on the matter. Useight (talk) 23:26, 11 December 2024 (UTC)
- You are entitled to your opinion of course, but as it is contradicted by the evidence of both multiple constructive uses and of the near-impossibility of reliably detecting LLM-generated text without false positives, I would expect the closer of this discussion to attach almost no weight to it. Thryduulf (talk) 00:42, 12 December 2024 (UTC)
- I am ESL and use LLMs sometimes because of that. I feel like I don't fit into the NOTHERE category. It seems like you do not understand what they are or how they can be used constructively. PackMecEng (talk) 01:43, 12 December 2024 (UTC)
- No, I understand. What you're talking about is no different from using Google Translate or asking a native-speaker to translate it. You, a human, came up with something you wanted to convey. You wrote that content in Language A. But you wanted to convey that message that you - a human - wrote, but now in Language B. So you had your human-written content translated to Language B. I have no qualms with this. It's your human-written content, expressed in Language B. My concern is with step 1 (coming up with something you want to convey), not step 2 (translating that content to another language). You write a paragraph for an article but it's in another language and you need the paragraph that you wrote translated? Fine by me. You ask an AI to write a paragraph for an article? Not fine by me. Again, I'm saying that there is no valid use case for AI-written content. Useight (talk) 15:59, 12 December 2024 (UTC)
- It seems very likely that there will be valid use cases for AI-written content if the objective is maximizing quality and minimizing errors. Research like this demonstrates that there will likely be cases where machines outperform humans in specific Wikipedia domains, and soon. But I think that is an entirely different question than potential misuse of LLMs in consensus related discussions. Sean.hoyland (talk) 16:25, 12 December 2024 (UTC)
- But your vote and the proposal above make no distinction there, which is the main issue. Also, not to be pedantic, but every prompt to an LLM is written by a human looking to convey a message. Every time someone hits publish on something here, that person is confirming that is what they are saying. So how do we in practice implement what you suggest? Because without a method better than vibes it's worthless. PackMecEng (talk) 18:53, 12 December 2024 (UTC)
- The proposal specifies content generated by LLMs, which has a specific meaning in the context of generative AI. If a prompt itself conveys a meaningful, supported opinion, why not just post that instead? The problem comes when the LLM adds more information than was provided, which is the whole point of generative models. JoelleJay (talk) 01:52, 13 December 2024 (UTC)
- Yes in principle. But in practice, LLM detectors are not foolproof, and there are valid reasons to sometimes use an LLM, for example to copyedit. I have used Grammarly before and have even used the Microsoft Editor, and while they aren't powered by LLMs, LLMs are a tool that needs to be used appropriately on Wikipedia. Awesome Aasim 19:55, 10 December 2024 (UTC)
- Support. Using an LLM to reply to editors is lazy and disrespectful of fellow editors' time and brainpower. In the context of AFD, it is particularly egregious since an LLM can't really read the article, read sources, or follow our notability guidelines. By the way.
gptzero and other such tools are very good at detecting this
. I don't think this is correct at all. I believe the false positive rate for AI detectors is quite high. High enough that I would recommend not using AI detectors. –Novem Linguae (talk) 03:23, 11 December 2024 (UTC)
- Question @Just Step Sideways: Since there appears to be a clear consensus against the AI-detectors part, would you like to strike that from the background? Aaron Liu (talk) 14:10, 11 December 2024 (UTC)
- Support. AI generated text should be removed outright. If you aren't willing to put the work into doing your own writing then you definitely haven't actually thought deeply about the matter at hand. User1042💬✒️ 14:16, 11 December 2024 (UTC)
- This comment is rather ironic given that it's very clear you haven't thought deeply about the matter at hand, because if you had then you'd realise that it's actually a whole lot more complicated than that. Thryduulf (talk) 14:26, 11 December 2024 (UTC)
- Thryduulf I don't think this reply is particularly helpful, and it comes off as slightly combative. It's also by my count your 24th comment on this RFC. BugGhost 🦗👻 19:20, 11 December 2024 (UTC)
- Oppose @Just Step Sideways: The nomination's 2nd para run through https://www.zerogpt.com/ gives "11.39% AI GPT*":
I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, gptzero and other such tools are very good at detecting this. I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. If you can't make a coherent argument on your own, you are not competent to be participating in the discussion. I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner
- The nomination's linked https://gptzero.me/ site previously advertised https://undetectable.ai/ ; how will we deal with that? Imagine the nomination was at AFD. What should be the response to LLM accusations against the highlighted sentence? 172.97.141.219 (talk) 17:41, 11 December 2024 (UTC)
- Support with the caveat that our ability to deal with the issue goes only as far as we can accurately identify the issue (this appears to have been an issue raised across a number of the previous comments, both support and oppose, but I think it bears restating because we're approaching this from a number of different angles and it's IMO the most important point regardless of what conclusions you draw from it). Horse Eye's Back (talk) 19:24, 11 December 2024 (UTC)
- Strong support, limited implementation.
Wikipedia is written by volunteer editors
, says our front page. This is who we are, and our writing is what Wikipedia is. It's true that LLM-created text can be difficult to identify, so this may be a bit of a moving target, and we should be conservative in what we remove—but I'm sure at this point we've all run across cases (whether here or elsewhere in our digital lives) where someone copy/pastes some text that includes "Is there anything else I can help you with?" at the end, or other blatant tells. This content should be deleted without hesitation. Retswerb (talk) 04:11, 12 December 2024 (UTC)
- Support in concept, questions over implementation — I concur with Dronebogus that users who rely on LLMs should not edit English Wikipedia. It is not a significant barrier for users to use other means of communication, including online translators, rather than artificial intelligence. How can an artificial intelligence tool argue properly? However, I question how this will work in practice without an unacceptable degree of error. elijahpepe@wikipedia (he/him) 22:39, 12 December 2024 (UTC)
- Many, possibly most, online translators use artificial intelligence based on LLMs these days. Thryduulf (talk) 22:46, 12 December 2024 (UTC)
- There is a difference between translating words you wrote in one language into English and using an LLM to write a comment for you. elijahpepe@wikipedia (he/him) 22:59, 12 December 2024 (UTC)
- Neither your comment nor the original proposal make any such distinction. Thryduulf (talk) 23:34, 12 December 2024 (UTC)
- Well since people keep bringing this up as a semi-strawman: no I don’t support banning machine translation, not that I encourage using it (once again, if you aren’t competent in English please don’t edit here) Dronebogus (talk) 07:34, 13 December 2024 (UTC)
- LLMs are incredible at translating, and many online translators already incorporate them, including Google Translate. Accommodating LLMs is an easy way to support not only avid ESL editors but also the avid but shy. It has way more benefits than the unseen-to-me amount of AI trolling that isn't already collapse-on-sight. Aaron Liu (talk) 00:05, 13 December 2024 (UTC)
- Google Translate uses the same transformer architecture that LLMs are built around, and uses e.g. PaLM to develop more language support (through training that enables zero-shot capabilities) and for larger-scale specialized translation tasks performed through the Google Cloud "adaptive translation" API, but it does not incorporate LLMs into translating your everyday text input, which still relies on NMTs. And even for the API features, the core constraint of matching input rather than generating content is still retained (obviously it would be very bad for a translation tool to insert material not found in the original text!). LLMs might be good for translation because they are better at evaluating semantic meaning and detecting context and nuance, but again, the generative part that is key to this proposal is not present. JoelleJay (talk) 01:20, 13 December 2024 (UTC)
PaLM (Pathways Language Model) is a 540 billion-parameter transformer-based large language model (LLM) developed by Google AI.
If you meant something about how reschlmunking the outputs of an LLM or using quite similar architecture is not really incorporating the LLM, I believe we would be approaching Ship of Theseus levels of recombination, to which my answer is it is the same ship.
obviously it would be very bad for a translation tool to insert material not found in the original text!
That happens! Aaron Liu (talk) 01:29, 13 December 2024 (UTC)
- PaLM2 is not used in the consumer app (Google Translate), it's used for research. Google Translate just uses non-generative NMTs to map input to its closest cognate in the target language. JoelleJay (talk) 01:34, 13 December 2024 (UTC)
- Well, is the NMT really that different enough to not be classified as an LLM? IIRC the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be, and an LLM I asked agreed that NMTs satisfy the definition of a generative LLM, though I think you're the expert here. Aaron Liu (talk) 02:01, 13 December 2024 (UTC)
- Google Translate's NMT hits different enough to speak English much less naturally than ChatGPT 4o. I don't consider it an LLM, because the param count is 380M not 1.8T.
the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be
No, that def would fit ancient RNN tech too. 172.97.141.219 (talk) 17:50, 13 December 2024 (UTC)
- Even if you don’t consider it L, I do, and many sources cited by the article do. Since we’ll have such contesting during enforcement, it’s better to find a way that precludes such controversy. Aaron Liu (talk) 20:44, 13 December 2024 (UTC)
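For a neutral reference point in this definitional dispute: the standard autoregressive factorization that both sides are invoking says nothing about architecture, which is why it covers older RNNs as readily as transformers. A common way to write it:

    $$ p(x_1, \ldots, x_T) = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1}) $$

Under this equation alone, RNN language models, NMT decoders, and transformer LLMs are all next-token predictors; what separates them in practice is parameter count, training corpus, and objective, not the sampling rule.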
- NMTs, LLMs, and the text-creation functionality of LLMs are fundamentally different in the context of this discussion, which is about content generated through generative AI. NMTs are built specifically for translation: they are trained on parallel corpora and their output is optimized to match the input as precisely as possible, not to create novel text. LLMs have different training, including far more massive corpora, and were designed specifically to create novel text. One of the applications of LLMs may be translation (though currently it's too computationally intensive to run them for standard consumer purposes), by virtue of their being very good at determining semantic meaning, but even if/when they do become mainstream translation tools, what they'll be used for is still not generative when it comes to translation output. JoelleJay (talk) 22:29, 13 December 2024 (UTC)
- How will you differentiate between the use of LLM for copyediting and the use of LLM for generation? Aaron Liu (talk) 23:30, 13 December 2024 (UTC)
- The proposal is for hatting obvious cases of LLM-generated comments. Someone who just uses an LLM to copyedit will still have written the content themselves and presumably their output would not have the obvious tells of generative AI. JoelleJay (talk) 23:56, 13 December 2024 (UTC)
- Not when I tried to use it. Quantitatively, GPTZero went from 15% human to 100% AI for me despite the copyedits only changing 14 words. Aaron Liu (talk) 00:33, 14 December 2024 (UTC)
- I think there is consensus that GPTZero is not usable, even for obvious cases. JoelleJay (talk) 00:55, 14 December 2024 (UTC)
- Yes, but a result as far off as 100% means people will also probably think the rewrite is ChatGPT-generated. Aaron Liu (talk) 01:18, 14 December 2024 (UTC)
- Does it really mean that? All you've demonstrated is that GPTZero has false positives, which is exactly why its use here was discouraged. jlwoodwa (talk) 05:26, 14 December 2024 (UTC)
- My subjective evaluation of what I got copyediting from ChatGPT was that it sounded like ChatGPT. I used GPTZero to get a number. Aaron Liu (talk) 14:18, 14 December 2024 (UTC)
- On one hand, AI slop is a plague on humanity and obvious LLM output should definitely be disregarded when evaluating consensus. On the other hand, I feel like existing policy covers this just fine, and any experienced closer will lend greater weight to actual policy-based arguments, and discount anything that is just parroting jargon. WindTempos they (talk • contribs) 23:21, 12 December 2024 (UTC)
- Support in principle, but we cannot rely on any specific tools because none are accurate enough for our needs. Whenever I see a blatant ChatGPT-generated !vote, I ignore it. They're invariably poorly reasoned and based on surface-level concepts rather than anything specific to the issue being discussed. If someone is using AI to create their arguments for them, it means they have no actual argument besides WP:ILIKEIT and are looking for arguments that support their desired result rather than coming up with a result based on the merits. Also, toasters do not get to have an opinion. The WordsmithTalk to me 05:17, 13 December 2024 (UTC)
Alternate proposal
[edit]Whereas many editors, including me, have cited problems with accuracy in regard to existing tools such as ZeroGPT, I propose that remarks that are blatantly generated by an LLM or similar automated system should be discounted/removed/collapsed/hidden. ThatIPEditor They / Them 10:00, 10 December 2024 (UTC)
- Oppose as completely unnecessary and far too prone to error per the above discussion. Any comment that is good (on topic, relevant, etc) should be considered by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is bad (off-topic, irrelevant, etc) should be ignored by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is both bad and disruptive (e.g. by being excessively long, completely irrelevant, bludgeoning, etc) should be removed and/or hatted as appropriate, regardless of whether it was made with LLM-input of any sort. The good thing is that this is already policy so we don't need to call out LLMs specifically, and indeed doing so is likely to be disruptive in cases where human-written comments are misidentified as being LLM-written (which will happen, regardless of whether tools are used). Thryduulf (talk) 11:19, 10 December 2024 (UTC)
- I think this proposal is not really necessary. I support it, but that is because it is functionally identical to the one directly above it, which I also supported. This should probably be hatted. BugGhost 🦗👻 18:32, 10 December 2024 (UTC)
- What does blatantly generated mean? Do you mean only where the remark is signed with "I, Chatbot", or anything that appears to be LLM-style? I don't think there's much in between. ypn^2 19:21, 10 December 2024 (UTC)
- Procedural close per BugGhost. I'd hat this myself, but I don't think that'd be appropriate since it's only the two of us who have expressed that this proposal is basically an exact clone. Aaron Liu (talk) 03:00, 11 December 2024 (UTC)
I wonder if there are any wiki-volunteers who have appeals experience and who would be willing to stand up for the Neutral Point of View Pillar of Wikipedia.
[edit]
I was banned from editing a specific topic after I stood up for WP:NPV. I do not really care much about the topic, but I care about Wiki-Policies, and I feel compelled to defend WP:NPV when it is violated by wiki-administrators. Usually, when you go to a court/appeal court in the USA, you can get a free counselor who helps you with the process. I wonder if there are any wiki-volunteers who have appeals experience and who would be willing to stand up for the 2nd of the Five Pillars - Neutral Point of View. Walter Tau (talk) 23:16, 4 December 2024 (UTC)
A short description of the case can be found here: https://en.wikipedia.org/w/index.php?title=Talk:Russian_invasion_of_Ukraine&action=edit&section=6 — Preceding unsigned comment added by Walter Tau (talk • contribs)
Analysis of the causes and results of the Russo-Ukrainian War by political scientists
I claim that the article as written violates the Wikipedia:Neutral point of view policy, which means representing fairly, proportionately, and, as far as possible, without editorial bias, ALL the significant views that have been published by reliable sources on a topic. Please note that I do not insist on adding anything about Douglas Macgregor's and Scott Ritter's views (although I support others, if they want to write about them), but I cannot disregard John Mearsheimer, Stephen Walt and several other political scientists. I shall start with addressing the statement by Manyareasexpert on 2024-11-26T10:35:23: "undo back to consensus version - objections raised in talk, edit war". Let's talk about the consensus first. Here is a citation from the Talk Page for Russian invasion of Ukraine on ca. 31 October 2024 (UTC): — Preceding unsigned comment added by Walter Tau (talk • contribs) 19:39, December 4, 2024 (UTC)
Guidance on illustrative use of AV, especially readings and subtitling
[edit]Hi there,
[EDIT: this is specifically requested regarding use of AV for illustrative, rather than sourcing, purposes. Compare MOS:IMAGES; there is no similar guidance for illustrative audio-video content.]
From a couple of recent conversations I think that MOS could do with a bit more guidance on the use of audio and video content. I know policy development can be difficult and tedious, so I don't say this lightly, but I have encountered some situations where guidance would be beneficial.
An option would be to amend MOS:IMAGES to explain that most of the guidance also applies to illustrative uses of audio-visual content.
Specifically:
- Where a media file is used, as a recording of an original source, what are the verification requirements? For example, if someone recorded a song, does it need comparison to the original score? How far should it deviate?
- What are the "aesthetic" considerations?
- If AV needs subtitling or translation, which is preferred? Translations once recorded, for example, are very hard to edit or correct compared to subtitles.
- How do we cater for users' needs and preferences? Subtitling seems a good way to go.
- Are there benefits to hearing the original for the user, even where they do not speak the language? Where might these occur more (e.g., in literary or poetic works, hearing the original is especially useful)?
- Are there preferences on audio-video length? E.g., are shorter clips generally preferable, with links preferred for long-form content?
Although the answers seem fairly obvious to me, I've found there is not always understanding or consensus on these points. I think some of this may be cultural - in particular, many EN speakers are resistant to foreign-language content, and thus to original-language content where that is not English. Other elements are UX matters, which are again not always obvious at a glance. Discussion and guidance might help find the right criteria and balance for assessment. Jim Killock (talk) 12:27, 6 December 2024 (UTC)
- It's longstanding policy that sources don't have to be in English but where possible English translations should be provided. Therefore subtitles seem like the policy-compliant option. Where you link to long-format media, provide a timestamp in your link which points to the part that directly supports the claim you're making.—S Marshall T/C 13:09, 6 December 2024 (UTC)
- Thank you; and apologies for not being more precise, I've edited the comment and title above to be clearer about what kind of guidance I think is missing, which is regarding illustrative usages rather than citation. Jim Killock (talk) 13:39, 6 December 2024 (UTC)
- I added a comment at MOS:IMAGES talk page. Jim Killock (talk) 17:57, 6 December 2024 (UTC)
Citations in anthroponymy lists
[edit]- User:Alalch E. has changed this section's title from the already descriptive "Removing sources", because this user disagrees how it describes the user's edits. – Editør (talk) 11:50, 8 December 2024 (UTC)
A user removed source references from a list in the good article Femke, which seems like vandalism to me. Can someone perhaps weigh in on the talk page discussion before this turns into an edit war? – Editør (talk) 02:52, 8 December 2024 (UTC)
- VPP is a good place to discuss the following portion of the widely-followed Wikipedia:WikiProject Anthroponymy/Standards. As evidenced by the fact that anthroponymy lists (a type of WP:SIA, but functionally and style-wise often very similar to a dab) do not have a citation next to each entry, your idea to add these citations, however justified and obvious an improvement it may seem to you, is a new idea that may not seem equally justified to everyone else ... said portion is:
Entries have certain limitations to promote consistency and usability. Not unlike Disambiguation pages, Names articles may contain lists of persons and each entry should follow a particular format.
Entries should not include External links. References are not required since the article that the entry is linked to should include citations.
- Instead of weighing in on whether to call me a vandal and on forecasts of edit warring, let us conduct a review of WikiProject Anthroponymy's WP:ADVICEPAGE. —Alalch E. 10:39, 8 December 2024 (UTC)
- It's definitely not vandalism. But, Alalch E, the fact that references "aren't required" doesn't mean they're banned. I think you should let people add sources if they want.—S Marshall T/C 11:13, 8 December 2024 (UTC)
- I agree that it is not vandalism according to Wikipedia:Vandalism, but I believe @Alalch E. shows intentional disruptive behaviour, including changing the heading of this post, which I have now changed back so I will again receive notification of new comments. – Editør (talk) 11:21, 8 December 2024 (UTC)
- You don't own section headings. I have changed the heading back to a descriptive heading. Stop that please. See WP:SECTIONHEADINGOWN. —Alalch E. 11:24, 8 December 2024 (UTC)
- Please stop your intentionally disruptive editing! – Editør (talk) 11:27, 8 December 2024 (UTC)
- Please take a short break from this topic of something like an hour to get some perspective. You have started from an assumption of bad faith and are seeming less and less reasonable by the minute. Kindly let a few more editors weigh in. Nothing here is urgent. —Alalch E. 11:28, 8 December 2024 (UTC)
- In addition to your "Removing sources" from the article Femke, you have reverted my edits to that article, made changes to my post here, and made changes to my comments on two talk pages. This is disruptive behaviour, even if it is not intentional. Please stop this immediately. – Editør (talk) 11:36, 8 December 2024 (UTC)
- Have you read the portions of the guidelines that I have linked in response to your attempts to enforce talk page headings and to determine the level of sections on my talk page? From the beginning of this dispute last night, you seem unusually distrustful, and more and more bent on enforcing your view of how things should be, even details that you have no control of, such as my talk page. Please step back to get a little perspective and let a few more editors weigh in. —Alalch E. 11:40, 8 December 2024 (UTC)
- With your changes to this section's heading you are effectively trying to change how I am describing your disruptive behaviour here and what I am asking help for. – Editør (talk) 11:46, 8 December 2024 (UTC)
- See the header of this page:
The policy section of the village pump is used to discuss already-proposed policies and guidelines and to discuss changes to existing policies and guidelines. Change discussions often start on other pages and then move or get mentioned here for more visibility and broader participation
(emphasis mine). If you want to discuss my purportedly disruptive behavior, you should perhaps start a section at WP:ANI. But since you have started a section here already, perhaps do not start too many discussions in quick sequence. —Alalch E. 11:50, 8 December 2024 (UTC)
- Please stop trying to control my comments. – Editør (talk) 11:52, 8 December 2024 (UTC)
- That's not a reasonable remark. What do you think about my already made observation that you are seeming less and less reasonable by the minute? —Alalch E. 11:55, 8 December 2024 (UTC)
- @S Marshall: Even though WP:SETNOTDAB applies, anthro lists are probably the most dab-like of all lists, and their entries are intentionally styled the same as dab page entries because these lists and disambiguation pages are closely interlinked, and for a reader who wants a seamless experience of browsing for a person and/or exploring names, the appearance should be consistent. Take a look at List of people named James for example. —Alalch E. 11:23, 8 December 2024 (UTC)
- Alalch, I think that this dispute puts the disputed content over the (rather low) threshold for "challenged or likely to be challenged" within the meaning of WP:V. I think core content policy trumps "seamless" or "consistent appearance". I hope that you will consider allowing Editør to add his citations, and I also hope you will reflect on whether you ought to be editing someone else's words to retitle this VPP thread.—S Marshall T/C 13:14, 8 December 2024 (UTC)
- The original title was "Removing citations": a discussion of one editor's actions which should be at ANI if anywhere. The current title "Citations in Anthroponymy lists" reflects the fact that the discussion is about policy: whether references should be included for blue-linked name-holder-list entries in Anthroponymy articles. On the one hand we have an article failed for GA because of an uncited list; on the other hand we have the standards of the Anthroponymy project which do not include such references. PamD 13:23, 8 December 2024 (UTC)
- This discussion follows a discussion at Talk:Tamara (given name)#List of names removal, where an editor was keen to remove the uncited list of name-holders (without creating a free-standing list, just removing them from the encyclopedia) so that the article might reach Good Article status. The article had been quick-failed for Good Article by @Voorts: on grounds including
The notable people and fictional character sections require citations for each of the entries.
I pointed out there that there are no single-name Anthroponymy Featured Articles to use as models, but that the three Good Articles included one with an uncited list of given-name holders (Femke), one with a link to a free-standing uncited list of name-holders, and one with a fully cited list of name-holders, all of whom were red links. That may have drawn attention to Femke and inspired an editor to add sources to all its name-holders.
- I do not think that references are needed in lists of name-holders in anthroponymy articles, where the information about the person is limited to name, dates and description based on the lead sentence of their article. Such unnecessary references clutter the article and should be avoided. If there needs to be an amendment to the standards followed for GA review, then this should be done, to avoid further disagreements. PamD 13:08, 8 December 2024 (UTC)
- I do not see how references at the end of lines clutter an article. GA reviews don't have specific rules for certain types of articles, but in general an entirely unsourced section is a likely cause for pause for a reviewer. CMD (talk) 13:17, 8 December 2024 (UTC)
- Like a lot of other places where we do say "references are not required" (for example, in the case of plot summaries), references that actually do work to support the content should not be removed. "Not required" is not the same as "not allowed". Whether references should be kept or used is a talk page issue to debate, but an editor should not go around removing references without consensus just because they are "not required". --Masem (t) 13:27, 8 December 2024 (UTC)
- (after edit conflict) I don't see any need to require citations for such lists. I also don't see any point in removing them if someone has gone to the trouble of providing them, but it is not vandalism. Surely we can cope with some minor inconsistencies between articles? Phil Bridger (talk) 13:30, 8 December 2024 (UTC)
- I argue that despite anthro lists specifically not being dab pages, they are functionally the closest thing to a dab page and are intentionally styled to look like one (MOS:DABPEOPLE:
... only enough descriptive information that the reader can distinguish between different people with the same name
) because of very close interrelatedness to dab pages (the difference is highly technical and imperceptible to a reader, who will seamlessly go from a people dab to an anthro list and back not knowing that they have visited different types of Wikipedia article space pages), and the age-old practice has been that MOS:DABNOLINK applies to such dab-equivalent entries (References should not appear on disambiguation pages. Dab pages are not articles; instead, incorporate the references into the target articles.
). Not spelled out anywhere and recorded as "not required" in WP:APO/S, but in evident practice, the references are not just not required, they are unwanted. The article is better without them as the experience for the reader is better without them. —Alalch E. 14:13, 8 December 2024 (UTC)
- I agree. I'm actually not convinced that lists of given-name holders are particularly worthwhile, but lists of surname holders are vital. As well as possibly helping those interested in the surname in itself, they help the much more common reader who finds a reference to "Bloggs' earlier work on the topic" or "X was influenced by Davies" and needs to scan a list of surname-holders to find the person, forename and initials unknown, who is being referred to. Dates and field of occupation are important - an 18th-century botanist may be the answer where a 20th-century tennis player is not. These lists need to be as complete as possible, to help the reader.
- If we go down the path where some editors add references to these lists, then we might move seamlessly along a path of references being "expected", not least for consistency in those articles, and newly-added unsourced entries being criticised, tagged, and perhaps routinely deleted as "unsourced BLP" by enthusiastic editors. Inevitably names will be added without references, but other editors will just stop bothering to add a name to a surname list because finding a single ref, or a small number, which elegantly sources all of their dates, nationality and occupation (or occupations) may be non-trivial. The reader would lose out.
- So I argue that adding references to name-holder lists is positively unhelpful, and removing such refs is useful.
- The time spent in adding such references could so much better be spent in improving genuinely unsourced or under-referenced articles: it's alarming to think that this might become a "favourite editing job", or even a "nice simple job recommended for novice editors". PamD 16:11, 8 December 2024 (UTC)
- I want to note that I'm fine removing references, despite my QF of the Tamara GA. I was applying the GA criteria and guidance at SIA, which says that citations are required if information beyond a wikilink is provided. I also wasn't aware of that part of the WikiProject Anthroponymy standards at the time. If there's consensus that these kinds of lists don't need citations, that's fine with me. Adopting this rule might affect whether these articles are eligible for FLC (see below) or GA/FA. voorts (talk/contributions) 19:00, 8 December 2024 (UTC)
- (ec) I can see an argument for not citing bluelinked namehavers in anthroponymy lists. What guides the choice of source? In the removal diff linked in the OP, I'm seeing a lot of citations to sources that establish the existence of various Femkes. Especially for the athletes, there's no indication from these sources why the Femke attested is notable.
In the diff, Femke Verstichelen is cited to https://www.uci.org/rider-details/94895, which provides her nationality, birthdate, sanctions (none), and two entries for Team History. This is a database entry that does nothing to establish notability, and accordingly isn't used as a reference in her article (it's an external link).
Again in the diff, Femke Van den Driessche is supported by the source https://olympics.com/en/athletes/femke-van-den-driessche, the content of which reads in full "Cycling<br />Year of birth 1996". This source – another database record – doesn't even establish nationality, and isn't linked from the subject's article at all.
I haven't clicked through to many of these, but the impression I'm getting is that the sources aren't really valuable here. I'm not trying to argue that bluelinks in anthroponymy lists have to establish notability in the list rather than just in the target article, but if we're going to bother adding citations for these people, why not make them informative and relevant? It's just a wasted clickthrough if a reader navigates to these database records instead of visiting the target article.
In general I do feel like lists of this type are disimproved by citations. If citations must be added by choice to anthroponymy lists like this, I feel the least worst solution would be to bundle them all into a single pair of <ref>...</ref> tags following the introductory sentence, which would make the section much easier to edit and significantly reduce bloat to the ==References== section. Folly Mox (talk) 16:13, 8 December 2024 (UTC)
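To make the bundling suggestion concrete, here is a minimal wikitext sketch; the opening sentence is paraphrased and the two database links from the diff stand in as placeholder sources:

    Femke is a feminine given name.<ref>Sources for the name-holders listed in this section: [https://www.uci.org/rider-details/94895 UCI rider details for Femke Verstichelen]; [https://olympics.com/en/athletes/femke-van-den-driessche Olympics.com profile for Femke Van den Driessche].</ref>

A single bundled footnote like this keeps each list entry uncluttered and the ==References== section to one line.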
I have added sources for the list of name bearers in the article Femke, because birth years and professions are sometimes listed wrongly and can be challenged. Therefore the sources are required by the Wikipedia:Good article criteria, specifically criterion #2b that states "reliable sources are cited inline. All content that could reasonably be challenged, except for plot summaries and that which summarizes cited content elsewhere in the article, must be cited no later than the end of the paragraph (or line if the content is not in prose)". So good articles should never rely on sources not cited inside the article itself. And removing sources because it is an editor's opinion they don't look nice goes against the good article criteria and against Wikipedia's core principle of verifiability. Sourcing lists of people isn't unusual as it is also common practice for articles like Births in 2000. However, as far as I'm concerned, sourcing lists doesn't need to be demanded for all lists of name bearers in articles about given names, but it should at the very least be accepted. – Editør (talk) 16:48, 8 December 2024 (UTC)
- @Hey man im josh: I believe you pointed out to me that given name articles probably shouldn't go through GA to begin with since SIAs are lists. Is that still your view? voorts (talk/contributions) 18:36, 8 December 2024 (UTC)
- I have mixed feelings on it, but I have generally felt that the name articles are often more akin to lists, depending on how many entries and the depth of the information on the name itself is. Hey man im josh (talk) 18:52, 8 December 2024 (UTC)
- Given name articles are sometimes just one sentence or paragraph with a list of names that looks like a disambiguation page. I tried to develop one given name article further and show that it can even be a good article where the list of names is just one section. I hoped that it could be an example to inspire others to improve given name articles as well. So some are set index articles, but others just have set index sections ({{given name}} using the section=y parameter). And in some cases the list is split off, such as the long List of people named David for David (name). There are simply different solutions possible that suit different names. – Editør (talk) 20:27, 8 December 2024 (UTC)
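As a rough sketch of that set-index-section arrangement (the heading text, any template parameters beyond section=y, and the sample entry are illustrative only):

    ==People with the given name==
    {{given name|section=y}}
    * [[Femke Halsema]] (born 1966), Dutch politician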
Should first language be included in the infobox for historical figures?
[edit]Is there a guideline concerning this? "Infobox royalty" apparently has this parameter, but I haven't found a single article that actually uses it. Many articles don't mention the subject's spoken languages at all. In my view, somebody's first language (L1) is just a very basic and useful piece of information, especially for historical figures. This would be helpful in cases where the ruling elites spoke a completely different language from the rest of the country (e.g., High Medieval England or early Qing dynasty China). These things are not always obvious to readers who are unfamiliar with the topic. Including it would be a nice and easy way to demonstrate historical language shifts that otherwise might be overlooked. Perhaps it could also bring visibility to historical linguistic diversity and language groups that have since disappeared. Where there are multiple first languages, they could all be listed. And in cases where a person's first language remains unclear, it could simply be left out. Kalapulla123 (talk) 11:53, 8 December 2024 (UTC)
- I don't think I agree this is a good use of infobox space:
- incongruences between elite spoken languages and popular spoken languages can't be shown with a single parameter (the language spoken by the oppressed would have to be included as well)
- for many people this would be unverifiable (already mentioned in OP) and / or contentious (people living during a language transition)
- sometimes L2 skills will be more than adequate to communicate with subject population when called for
- in cases where the subject's L1 matches their polity's (i.e. most cases), the parameter would feel like unnecessary clutter
- prose description seems adequate
However, this is just my opinion, and the venue of discussion should probably be Wikipedia talk:WikiProject Royalty and Nobility or similar, rather than VPP. Folly Mox (talk) 12:02, 9 December 2024 (UTC)
- I think this might be sufficiently important pretty much exclusively for writers where the language they wrote in is not the "obvious" one for their nationality. Johnbod (talk) 12:43, 9 December 2024 (UTC)
- It might also be important for politicians (and similar figures?) in countries where language is a politically-important subject, e.g. Belgium. Thryduulf (talk) 16:29, 9 December 2024 (UTC)
- This seems like a bad idea. Let's take a case where language spoken by a royal was very relevant: Charles V, Holy Roman Emperor. When he became King of Castile as a teenager, he only really spoke Flemish and didn't speak Castilian Spanish, and needless to say trusted the advisors he could actually talk with (i.e. Flemish / Dutch ones he brought with him). He also then immediately skipped out of Castile to go to proto-Germany to be elected Holy Roman Emperor. This ended up causing a rebellion (Revolt of the Comuneros) which was at least partially justified by Castilian nationalism, and partially by annoyed Castilian elites who wanted cushy government jobs. So language-of-royal was relevant. But... the Infobox is for the person as a whole. Charles came back to Castile and spent a stretch of 10 years there and eventually learned rather good Castilian and largely assuaged the elite, at least. He was king of Spain for forty years. So it would seem rather petty to harp on the fact his first language wasn't Castilian in the Infobox, when he certainly did speak it later and through most of his reign, even if not his first few years when he was still basically a kid. SnowFire (talk) 19:47, 9 December 2024 (UTC)
- See below on this. Johnbod (talk) 14:26, 11 December 2024 (UTC)
- SnowFire's fascinating anecdote shows that this information is not appropriate for infoboxes but rather should be described in prose in the body of the article where the subtleties can be explained to the readers. Cullen328 (talk) 19:56, 9 December 2024 (UTC)
- No, it shows that it's not appropriate for that infobox, and therefore that it is not suitable for all infoboxes where it is plausibly relevant. It shows nothing about whether it is or is not appropriate for other infoboxes: the plural of anecdote is not data. Thryduulf (talk) 21:08, 9 December 2024 (UTC)
- But it kind of is here? I picked this example as maybe one of the most obviously relevant cases. Most royals failing to speak the right language don't have this trait linked with a literal war in reliable sources! But if inclusion of this piece of information in an Infobox is still problematic in this case, how could it possibly be relevant in the 99.9% cases of lesser importance? The Infobox isn't for every single true fact. SnowFire (talk) 21:53, 9 December 2024 (UTC)
- It isn't suitable for this infobox not because of a lack of importance, but because stating a single first language would be misleading. There exists the very real possibility of cases where it is both important and simple. Thryduulf (talk) 00:02, 10 December 2024 (UTC)
- Could you (or anyone else in favor of the proposal) identify 5 biographies where this information is both useful to readers and clearly backed by reliable sources? signed, Rosguill talk 15:06, 11 December 2024 (UTC)
- Charles V claimed to have spoken Italian to women, French to men, Spanish to God, and German to his horse. Hawkeye7 (discuss) 21:35, 9 December 2024 (UTC)
- Sorry, this is just nonsense! Charles V was raised speaking French, which was the language of his aunt's court, although in the Dutch-speaking Mechelen. All his personal letters use French. He only began to be taught Dutch when he was 14, & may never have been much good at it (or Spanish or German). Contrary to the famous anecdote, which is rather late and dubious ("Spanish to God....German to my horse") he seems to have been a rather poor linguist, which was indeed awkward at times. Johnbod (talk) 00:39, 10 December 2024 (UTC)
- (This is a bit off-topic, but "nonsense" is too harsh. I'm familiar that he spoke "French" too, yes, although my understanding was that he did speak "Flemish", i.e. the local Dutch-inflected speech, too? And neither 1500-era French nor Dutch were exactly standardized, so I left it as "Flemish" above for simplicity. If his Dutch was worse than I thought, sure, doesn't really affect the point made, though, which was that his Castilian was non-existent at first. As far as his later understanding of Spanish, his capacity was clearly enough - at the very least I've seen sources say he made it work and it was enough to stave off further discontent from the nobility. Take it up with the authors of the sources, not me.). SnowFire (talk) 16:23, 10 December 2024 (UTC)
- There's a difference between "simplicity" and just being wrong! You should try reading the sources, with which I have no issue. And his ministers were also either native Francophones, like Cardinal Granvelle and his father Nicolas Perrenot de Granvelle (both from Besançon, now in eastern France), or could speak it well; the Burgundian elite had been Francophone for a long time. The backwash from all this remains a somewhat sensitive issue in Belgium, even now. And Charles V was not "King of Spain" (a title he avoided using) for 40 years at all; only after his mother died in 1555 (a year before him) did he become unarguably King of Castile. Johnbod (talk) 14:26, 11 December 2024 (UTC)
- It may not be appropriate for many articles, but it surely is for some. For example, when I told her that England had had kings whose first language was German, someone asked me the other day how many. It would be good to have a quick way of looking up the 18th century Georges to find out. Phil Bridger (talk) 21:20, 9 December 2024 (UTC)
- I think the problem is that people might make assumptions. I would check before saying that George I and George II spoke German as their first language and not French. Languages spoken is probably more useful than birth language, but the list might be incomplete. There is also competing information about George I, and he is an English King, so he has been better researched and documented compared to other historical figures.
- I agree that this is important when language is the basis of community identity, such as in Belgium. Tinynanorobots (talk) 10:38, 10 December 2024 (UTC)
- Ummmm… no. People I disagree with™️ use “infobox bloat” as a boogeyman in arguments about infoboxes. But this is infobox bloat. Even those celebrity/anime character things that tell you shoe size, pinky length and blood type wouldn’t include this. Dronebogus (talk) 18:16, 11 December 2024 (UTC)
- I don't think there needs to be any central policy on this. It could be relevant to include this information for someone, perhaps... maybe... However, infoboxes work best when they contain uncontroversial at-a-glance facts that don't need a bunch of nuance and context to understand. For the example of Charles V, maybe his first language is significant, but putting it in the infobox (where the accompanying story cannot fit) would be a confusing unexplained factoid. Like, maybe once upon a time there was a notable person whose life turned on the fact that they were left-handed. That could be a great bit of content for the main article, but putting handedness in the infobox would be odd. Barnards.tar.gz (talk) 14:33, 12 December 2024 (UTC)
- {{Infobox baseball biography}} includes handedness, and nobody finds that odd content for an infobox.
- {{infobox royalty}} includes the option for up to five native languages, though the OP says it seems to be unused in practice. {{Infobox writer}} has a |language= parameter, and it would be surprising if this were unused. WhatamIdoing (talk) 19:36, 12 December 2024 (UTC)
- Baseball seems to be a good example of where handedness is routinely covered, and easily consumable at a glance without needing further explanation. The scenario where I don't think handedness (or first language) makes sense is when it is a uniquely interesting aspect of that individual's life, because almost by definition there's a story there which the infobox can't tell. Barnards.tar.gz (talk) 10:23, 13 December 2024 (UTC)
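For concreteness, the writer parameter mentioned above is filled in like any other infobox field; a stripped-down sketch with placeholder values:

    {{Infobox writer
    | name     = Example Writer
    | language = French
    }}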
- I don't think L1 can be determined for most historical figures without a hefty dose of OR. If you look at my Babel boxes, you'll see that I, as a living human being with all the information about my own life, could not tell you what my own "L1" is. The historical figures for whom this would be relevant mostly spoke many more languages than I do, and without a time machine it would be nigh impossible to say which language they learned first. This isn't even clear for the Qing emperors – I am fairly certain that they all spoke (Mandarin) Chinese very well, and our article never says what language they spoke. Puyi even states that he never spoke Manchu. Adding this parameter would also inflame existing debates across the encyclopedia about ethnonationalism (e.g. Nikola Tesla) and infobox bloat. Toadspike [Talk] 21:21, 12 December 2024 (UTC)
- As with every bit of information in every infobox, if it cannot be reliably sourced it does not go in, regardless of how important it is or isn't. There are plenty of examples of people whose first language is reported in reliable sources, I just did an internal search for "first language was" and on the first page of results found sourced mentions of first language at Danny Driver, Cleopatra, Ruthanne Lum McCunn, Nina Fedoroff, Jason Derulo, Henry Taube and Tom Segev, and an unsourced but plausible mention at Dean Martin. The article strongly suggests that her first language is an important part of Cleopatra's biography such that putting it in the infobox would be justifiable. I am not familiar enough with any of the others to have an opinion on whether it merits an infobox mention there, I'm simply reporting that there are many articles where first language is reliably sourced and a mention is deemed DUE. Thryduulf (talk) 22:08, 12 December 2024 (UTC)
- I have been wondering since this conversation opened how far back the concept of an L1 language, or perhaps the most colloquial first language, can be pushed. Our article doesn't have anything on the history of the concept. CMD (talk) 11:31, 13 December 2024 (UTC)
- I suspect the concept is pretty ancient, I certainly wouldn't be surprised to learn it arose around the same time as diplomacy between groups of people with different first languages. The note about it at Cleopatra certainly suggests it was already a well-established concept in her era (1st century BCE). Thryduulf (talk) 13:23, 13 December 2024 (UTC)
- The concept of different social strata speaking different languages is old, but I'm not sure whether they viewed learning languages the same way we do. It's certainly possible, and perhaps it happened in some areas at some times, but I hesitate to assume it's the case for every historical person with an infobox. CMD (talk) 16:05, 13 December 2024 (UTC)
- It's certainly not going to be appropriate for the infobox of every historical person, as is true for (nearly?) every parameter. The questions here are whether it is appropriate in any cases, and if so in enough cases to justify having it as a parameter (how many is enough? I'd say a few dozen at minimum, ideally more). I think the answer the first question is "yes". The second question hasn't been answered yet, and I don't think we have enough information here yet to answer it. Thryduulf (talk) 21:54, 13 December 2024 (UTC)
Restrict new users from crosswiki uploading files to Commons
[edit]I created this Phabricator ticket (phab:T370598) in July of this year, figuring that consensus to restrict non-confirmed users from crosswiki uploading files to Commons is implied. Well, consensus already agreed at Commons in response to the WMF study on crosswiki uploading. I created an attempted Wish at Meta-wiki, which was then rejected, i.e. "archived", as policy-related and requir[ing] alignment across various wikis to implement such a policy. Now I'm starting this thread, thinking that the consensus here would already or implicitly support such restriction, but I can stand corrected about the outcome here. George Ho (talk) 06:34, 9 December 2024 (UTC); corrected, 08:10, 9 December 2024 (UTC)
- Support. I am not sure why this relies on alignment across wikis; those on Commons are best placed to know what is making it to Commons. The change would have little to no impact on en.wiki. If there is an impact, it would presumably be less cleaning up of fair-use files migrated to Commons that need to be fixed here. That said, if there needs to be consensus, then obviously support. We shouldn't need months of bureaucracy for this. CMD (talk) 06:41, 9 December 2024 (UTC)
- Support, I don't know that my input really counts as new consensus because I said this at the time, but the problem is much worse than what the study suggests, as we are still finding spam, copyvios, unusable selfies and other speedy-deletable uploads from the timespan audited. Gnomingstuff (talk) 02:14, 10 December 2024 (UTC)
- Support As this applies to images being posted to Commons, but by a method that sidesteps their wishes, I don't see why another wiki should stand in the way. -- LCU ActivelyDisinterested «@» °∆t° 16:54, 10 December 2024 (UTC)
- Support. I do think that disabling the ability for new editors on the English Wikipedia to engage in crosswiki uploads to Commons would be a net positive; the Commons community has come to this conclusion several times, and the research confirms that cross-wiki uploads by new users cause more trouble than the good uploads are worth. — Red-tailed hawk (nest) 00:36, 11 December 2024 (UTC)
- Support Way too low signal-to-noise ratio; most of these images are copyvios or otherwise useless. -- King of ♥ ♦ ♣ ♠ 01:12, 11 December 2024 (UTC)
- Support like the above editors. Much spam, many copyvios, few good images.—Alalch E. 15:47, 11 December 2024 (UTC)
- I don't think this should be any sort of enwiki policy. If commonswiki wants to restrict something that should be up to them. I can't possibly see how it would need to be specific to the English Wikipedia (i.e. but not about new users on dewiki, eswikt, etc). — xaosflux Talk 16:19, 11 December 2024 (UTC)
- As noted by George Ho above, Commons has already done this for all wikis. The question is whether or not we want the English Wikipedia to assist in implementing this (perhaps by changing a local setting or software configuration to require that their uploads be local), rather than merely relying upon a Commons edit filter (which can be a bit unfriendly to new users). — Red-tailed hawk (nest) 19:50, 11 December 2024 (UTC)
- This comment interests me: "Interestingly, we found that most uploaders were either marketers (editing/uploading on behalf of another entity such as their employer), or they were self-promoters (creating pages about themselves, unaware of the "notability" requirement)."
- So I wonder whether, instead of stopping this, we want a bot to look at newbies who create articles/drafts, check whether they uploaded something, and then tag both the image(s) and the pages here with a note that says something like "There is a 90% chance that this has been posted by a marketer or self-promoter", with suitable links to pages such as Wikipedia:Paid-contribution disclosure. Or maybe even a WP:STICKYPROD process.
- On the question of what to do, it should be possible to hide the cross-wiki upload button. The real question is, do we replace it with a link to c:Special:UploadWizard? The Commons POV has been that it's bad for people to upload images within the visual editor, but okay for the same person to upload the same image with the UploadWizard. I'm not sure the net result is actually any different, especially for these marketers/self-promoters (in terms of net quality/acceptability; from Commons' POV, it's better because (a lot? a little?) fewer of them will click through to upload anything at Commons). WhatamIdoing (talk) 19:49, 12 December 2024 (UTC)
- Support Nearly every single thing I've ever put up for deletion at Commons has been stuff uploaded to spam en.wp. It never stops. Just Step Sideways from this world ..... today 19:55, 11 December 2024 (UTC)
- Is this still happening? According to @Red-tailed hawk this is already blocked. — xaosflux Talk 20:52, 11 December 2024 (UTC)
- Yes, it's still happening. Such uploads include these images from EnWiki; the edit filter, as currently implemented, only filters out images with certain characteristics. — Red-tailed hawk (nest) 21:05, 11 December 2024 (UTC)
- It is for sure still happening, I've nominated a few in just the past week. Just Step Sideways from this world ..... today 22:26, 11 December 2024 (UTC)
- It's still happening. A lot of them go to the uncategorized backlog which has well over 100,000 things in it so they get overlooked. Gnomingstuff (talk) 19:18, 12 December 2024 (UTC)
- If anyone wants to help with that, then click on c:Special:RandomInCategory/Category:All media needing categories as of 2018. Figure out what the image is (Google Lens or TinEye searches can help; go to c:Special:Preferences#mw-prefsection-gadgets and ⌘F for TinEye to find the right item). If you can identify it, then add a relevant cat. I believe that Wikipedia:HotCat is enabled by default for all logged-in editors, so searching for cats is usually pretty easy. If you can't find something obviously relevant, then skip it and try another. WhatamIdoing (talk) 20:02, 12 December 2024 (UTC)
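For anyone picking up that backlog task: categorizing a Commons file ultimately amounts to appending a category link to the file page's wikitext, for example (category name illustrative):

    [[Category:Unidentified plants]]

HotCat automates exactly this edit, with autocompletion over existing category names.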
- I got another one just now [10]. This really can't happen fast enough. Just Step Sideways from this world ..... today 23:51, 12 December 2024 (UTC)
- Support It's honestly kinda dumb that we have to have this whole other consensus process after the prior one just because people at Meta-wiki don't want to implement it. SilverserenC 20:35, 13 December 2024 (UTC)
The Notability of Indian Universities
[edit]There is a need to better understand how the notability criteria work for Indian universities. Right now, we are looking at things like a university's rankings, research work, and its role in improving education. For academicians and vice chancellors, we consider things like research publications, fellowships, and leadership experience. However, in India, there is a big concern about the rise of educational institutions that claim to be non-profit but are run as businesses, with leadership often influenced by political connections or family ties. Also, the pages of most of these private universities, including those of their vice chancellors, are just promotional, based on paid reporting in Indian news organizations and listing courses or publications, which breaks Wikipedia's WP:NOTDIRECTORY rule. They also rely heavily on rankings from multiple portals to boost their articles' text. At the assessment level, there are two main opinions: one says a university is notable (i.e., passes WP:GNG) if it is approved by the University Grants Commission or set up by a state act or statute, while the other says universities must meet the strict WP:NORG guidelines to have a Wikipedia article. Our goal is not to judge or oppose any institution. But it is time to use different criteria to evaluate these organizations from India.
For greater clarity, please take a look at the following ongoing AfDs: Wikipedia:Articles_for_deletion/Adani_University and Wikipedia:Articles_for_deletion/Neotia_University
I am also inviting the following editors, who recently took part in the AfDs mentioned above, to join a helpful discussion: Pharaoh of the Wizards, Ratnahastin, GrabUp, Necrothesp, Sirfurboy, and CptViraj. -- Charlie (talk) 04:12, 10 December 2024 (UTC)
- WP:NSCHOOL is very clear on this:
All universities, colleges and schools, including high schools, middle schools, primary (elementary) schools, and schools that only provide a support to mainstream education must satisfy either the notability guidelines for organizations (i.e., this page) or the general notability guideline.
(emphasis mine) - All universities, whether Indian or not, and whether or not they have been established by a statute, need to satisfy either WP:NORG or WP:GNG in order to be considered notable. The rankings are merely routine coverage, as they are released periodically. Also, we cannot use WP:OUTCOMESBASED arguments to keep an article, as that is simply circular reasoning (i.e., keep an article because we usually keep them at AfDs). I am not sure if we need a separate guideline or clause for Indian universities, given that most Indian media coverage of any organisation is often sponsored without any disclosure, per WP:NEWSORGINDIA & User:Ms Sarah Welch/sandbox/Paid news and private treaties. - Ratnahastin (talk) 04:26, 10 December 2024 (UTC)
- There is a line in WP:SCHOOLOUTCOME:
Most independently accredited degree-awarding institutions have enough coverage to be notable, although that coverage may not be readily available online.
Should we really accept this as an argument that "Maybe there are offline sources, so Keep"—without citing any offline sources? GrabUp - Talk 04:35, 10 December 2024 (UTC)
- We don't accept it. WP:SCHOOLOUTCOME is an argument to be avoided at AfD. That is just describing the situation generally, and does not create a presumption of notability. Sirfurboy🏄 (talk) 07:46, 10 December 2024 (UTC)
- Agree that we should never use outcome-based arguments. What matters is the sourcing, because how else can the page be written? In the main, I think the P&G is fine. These must meet NORG or GNG. But there is a difference: we allow public schools and non-profits to meet GNG, but private for-profit schools must meet NORG. As long as we do that, Charlie has raised a significant concern that Indian universities should probably be required to meet NORG when, on the face of it, they are non-profits that only need to meet GNG. We have WP:NEWSORGINDIA. Do we need a touch of guidance about these institutions? Also
in India, there is a big concern about the rise of educational institutions that claim to be non-profit but are run as businesses
- could we have some reference to these concerns, which we would need in order to justify such an additional guideline. Thanks. Sirfurboy🏄 (talk) 07:55, 10 December 2024 (UTC)
- @Sirfurboy
- Here are a few articles:
- 1. 2011 article: Large number of colleges are run by politicians, builders: V. Raghunathan
- 2. 2016 article: Private higher education is burgeoning in India – but millions can't afford it. There is a sentence in this article, "Private institutions keep the cost of education high, despite restrictions on generating profit."
- 3. 2018 article: Educational Institutions must earmark certain percentage of seats for poorer sections and subsidize their education: Vice President. There is a sentence in this article, "Calling for a complete overhaul of our education system, the Vice President said that majority of our colleges have become mere breeding centres for producing students with degree certificates rather than individuals with critical analytical skills."
- 4. 2021 article: 90% of India's students go to colleges where there is little research done: PSA VijayRaghavan
- CITEHIGHLIGHTER shows that some reliable sources include paid or sponsored news, sometimes disguised as ads:
- 1. Business Standard: Bharath Institute of Higher Education and Research tops the list of Private Universities in India - Sponsored post
- 2. The Indian Express: Manipal University, Jaipur Admissions 2025: UG and PG Admissions, Eligibility and Selection process - Direct price list promotion.
- 3. ThePrint: Enhance Your Career with Manipal University’s Accredited Online Degree Programs
- 4. Business Standard: Ahmedabad University Inaugurates India's First MTech in Composites, Creating Pathways for Next Generation of Material Scientists. - Sponsored post.
- 5. The Hindu: Manav Rachna defines New Milestones | Becomes First Indian University to offer IB Educator Certificate in PYP, MYP and DP. - Sponsored post.
- 6. Business Standard: Shoolini Ranks No.1 Private University in India, Again. - Sponsored post.
- Also, it has been found that some universities in India are gaming research publications:
- 1. Chemistry World: Are Indian higher education institutes gaming the ranking system?
- 2. ThePrint: India’s research crime is getting worse. Scientists are gaming peer review system
- 3. ThePrint: This Indian watchdog is cleaning up ‘mess’ in academia—falsification, fabrication & fraud
- Wikipedia is the only place on the internet where such entities try to gain legitimacy through the pseudo-promotion of their institutions. If we maintain basic vigilance, we can save many gullible parents and their children in India from being cheated. Charlie (talk) 12:58, 10 December 2024 (UTC)
- Paid news is ubiquitous in India; those that do not pay up are denied coverage. [11] - Ratnahastin (talk) 13:54, 10 December 2024 (UTC)
- @CharlieMehta, some of the complaints above have nothing to do with notability. Politicians have complained about the quality and price of education in every country. That has nothing to do with the guideline.
- Something that surprises some people is that 'non-profit' doesn't mean 'low cost' or 'poor' or even 'charitable'. Non-profit means that if expenses are lower than revenue, then nobody gets to pocket the profits as their own personal money. You can have a non-profit cigarette maker, or a non-profit gasoline producer. The difference is:
- For-profit: Spend $90 to make something (including your salary), sell it for $100, allowed (but not required) to take the $10 difference home for yourself.
- Non-profit: Spend $90 to make something (including your salary), sell it for $100, not allowed to take the $10 difference home for yourself.
- That's the only difference. These other things – the 'wrong' people are running them, the price is too high, the quality is too low – are completely irrelevant. WhatamIdoing (talk) 20:39, 12 December 2024 (UTC)
- @WhatamIdoing I intended to offer some perspective to the discussion in response to the question raised by Sirfurboy. At the same time, the points and clarifications you have provided are very helpful in steering the conversation back to the actual guidelines and criteria rather than focusing on subjective or extraneous factors. Charlie (talk) 08:47, 13 December 2024 (UTC)
- Note WP:CONSENSUS. There is very definitely a consensus at AfD that fully accredited universities established by statute should be considered to be notable. I can't recall one being deleted. -- Necrothesp (talk) 08:36, 10 December 2024 (UTC)
- Where is the RFC that establishes this consensus? Is it in any policy or subject notability guidelines? What we recall is not always a reliable indication even of the consensus at our self-selected engagement. For instance, you made the argument here [12] and the page was not kept. Sirfurboy🏄 (talk) 08:58, 10 December 2024 (UTC)
- There are examples where fully accredited universities were deleted via AfD or WP:CONSENSUS, such as Wikipedia:Articles for deletion/Sant Baba Bhag Singh University, which I recall as I participated in it. GrabUp - Talk 11:51, 10 December 2024 (UTC)
- @Ratnahastin, I don't think that "released periodically" is the definition of "routine coverage". WP:CORPDEPTH says "brief mentions and routine announcements". A report is not a "routine announcement", even if it happens periodically.
- Perhaps we should clarify the meaning of "routine" in the guideline. WhatamIdoing (talk) 20:27, 12 December 2024 (UTC)
- The only thing that should matter is whether there are multiple reliable independent secondary sources that provide significant coverage. That's what's necessary to write an article, and any attempts to get around this or ignore it should be discarded. Promotional and paid content do not meet the requirement of independence. Thebiguglyalien (talk) 02:40, 11 December 2024 (UTC)
- If I'm understanding CharlieMehta's post, I think the concerns are that we can't reliably identify paid news when it's coming out of India, even when it's not clearly marked as sponsored, so guidance clarifying/reminding editors of NEWSORGINDIA in the context of Indian schools might be warranted; that allegedly non-profit universities might actually be operating for profit, in which case the stronger source scrutiny required by NORG might be needed even for "public" universities; and that the often deplorable degree of research fraud, corruption, fake stats, and nepotism in regards to academic career advancement may mean NPROF's C6 guideline (VCs of major academic institutions are notable) is faulty when it comes to VCs of Indian universities. JoelleJay (talk) 03:19, 11 December 2024 (UTC)
While this doesn't fit into the tidy binary flow charts that we imagine, if it's a significant separate university facility it tends to get a few brownie points in the evaluation for being a geographic entity. I think that a practical standard is that if it isn't a significant separate university facility, it should meet a strict interpretation of the NCORP GNG. And, given the "pay to get coverage" situation in India, what's in the source can help judge in the discussion whether it meets that standard. North8000 (talk) 20:56, 12 December 2024 (UTC)
Use of the status parameter in Infobox officeholder
[edit]For several weeks, editors involved in updating the infoboxes (Template:Infobox officeholder) on Trump's nominees have supplied status information about a candidate's position either within the title itself, e.g. Special:Permalink/1262197122, or through the status parameter, e.g. Special:Permalink/1262208196. This should be standardized. elijahpepe@wikipedia (he/him) 05:02, 10 December 2024 (UTC)
- It's an infobox for office holders. These people do not actually hold an office at this time. Therefore, the infobox shouldn't be in their articles. --User:Khajidha (talk) (contributions) 11:41, 11 December 2024 (UTC)
- Also… as an aside… technically Trump is not yet the “President Elect” … he is “President presumptive” until the electoral college reports to the Senate. Blueboar (talk) 12:55, 11 December 2024 (UTC)
- That may be factually correct, but sources are calling him "President Elect" and have been for some time. Just Step Sideways from this world ..... today 19:58, 11 December 2024 (UTC)
Two Questions from a Deletion Review
[edit]Here are two mostly unrelated questions that came up in the course of a Deletion Review. The DRV is ready for closure, because the appellant has been blocked for advertising, but I think that the questions should be asked, and maybe answered. Robert McClenon (talk) 20:44, 11 December 2024 (UTC)
At DRV, there are sometimes requests to restore to draft space or user space material from pages that were deleted as G11, purely promotional material. DRV is sometimes the second or third stop for the originator, with the earlier stops being the deleting administrator and Requests for Undeletion.
Requests for Undeletion has a list of speedy deletion codes for which deleted material is not restored, including G11. (They say that they do not restore attack pages or copyright violations. They also do not restore vandalism and spam.) Sometimes the originator says that they are trying to rewrite the article to be neutral. My question is whether DRV should consider such requests on a case-by-case basis, as is requested by the originators, or whether DRV should deny the requests categorically, just as they are denied at Requests for Undeletion. I personally have no sympathy for an editor who lost all of their work on a page because it was deleted and they didn't back it up. My own opinion is that they should have kept a copy on their hard drive (or solid-state device), but that is my opinion.
We know that the decision that a page should be speedily deleted as G11 may properly be appealed to Deletion Review. My question is about requests to restore a draft that was properly deleted as G11 so that the originator can work to make it neutral.
I am also not asking about requests for assistance in telling an author what parts of a deleted page were problematic. In those cases, the author is asking the Wikipedia community to write their promotional article for them, and we should not do that. But should we consider a request to restore the deleted material so that the originator can make it neutral? Robert McClenon (talk) 20:44, 11 December 2024 (UTC)
- When we delete an article we should always answer reasonable questions asked in good faith about why we deleted it, and that includes explaining why we regard an article as promotional. We want neutral encyclopaedic content on every notable subject, if someone wants to write about that subject we should encourage and teach them to write neutral encyclopaedic prose about the subject rather than telling them to go away and stop trying because they didn't get it right first time. This will encourage them to become productive Wikipedians, which benefits the project far more than the alternatives, which include them trying to sneak promotional content onto Wikipedia and/or paying someone else to do that. So, to answer your question, DRV absolutely should restore (to draft or userspace) articles about notable subjects speedily deleted per G11. Thryduulf (talk) 21:09, 11 December 2024 (UTC)
- If the material is truly unambiguous advertising, then there's no point in restoring it. Unambiguous advertising could look like this:
- "Blue-green widgets are the most amazing widgets in the history of the universe, and they're on sale during the holiday season for the amazingly low, low prices of just $5.99 each. Buy some from the internet's premier distributor of widgets today!"
- If it's really unambiguous advertising to this level, then you don't need a REFUND. (You might ask an admin to see if there were any independent sources they could share with you, though.)
- When it's not quite so blatant, then a REFUND might be useful. Wikipedia:Identifying blatant advertising gives some not-so-blatant, not-so-unambiguous examples of suspicious wording, such as:
- It refers to the company or organization in the first-person ("We are a company based out of Chicago", "Our products are electronics and medical supplies").
- This kind of thing makes me suspect WP:PAID editing, but it's not irredeemable, especially if it's occasional, or that's the worst of it. But in that case, it shouldn't have been deleted as G11. WhatamIdoing (talk) 21:37, 12 December 2024 (UTC)
- Blanket permission to restore every G11 to userspace or draftspace might make sense if you're, say, an admin who's mentioned G11 only once in his delete logs over the past ten years. Admins who actually deal with this stuff are going to have a better feel for how many are deleted from userspace or draftspace to begin with (just short of 92% in 2024) and how likely a new user who writes a page espousing how "This technical expertise allows him to focus on the intricate details of design and construction, ensuring the highest standards of quality in every watch he creates" is to ever become a productive Wikipedian (never that I've seen). If it wasn't entirely unsalvageable, it wasn't a good G11 to begin with. —Cryptic 14:05, 13 December 2024 (UTC)
A Question About Administrator Accountability
[edit]Some administrators have semi-protected their talk pages due to abuse by unregistered editors. An appellant at DRV complained that they were unable to ask the deleting administrator about a G11 because the talk page was semi-protected, and because WP:AN was semi-protected. An editor said that this raised Administrator Accountability issues. My question is whether they were correct about administrator accountability issues. My own thought is that administrator accountability is satisfied if the administrator has incoming email enabled, but the question was raised by an experienced editor, and I thought it should be asked. Robert McClenon (talk) 20:44, 11 December 2024 (UTC)
- Administrators need to be reasonably contactable. Administrators explicitly are not required to have email enabled, and we do not require other editors to have email enabled either (a prerequisite to sending an email through Wikipedia), and several processes require leaving talk page messages for administrators (e.g. ANI). Additionally, sending an email via the Special:EmailUser system will disclose your email address, so we cannot compel any editor to use email. Putting this all together, it seems clear to me that accepting email does not automatically satisfy administrator accountability. Protecting talk pages to deal with abuse should only be done where absolutely necessary (in the case of a single editor doing the harassing, that editor should be (partially) blocked instead, for example) and for the shortest amount of time necessary, and should explicitly give other on-wiki options for those who cannot edit the page but need to leave the editor a message. Those alternatives could be to leave a message on a different page, to use pings, or some other method. Where no such alternatives are given I would argue that the editor should use {{help me}} on their own talk page, asking someone else to copy a message to the admin's talk page. Thryduulf (talk) 21:22, 11 December 2024 (UTC)
- I think this is usually done in response to persistent LTA targeting the admin. I agree it should be kept short. We've also seen PC being used to discourage LTA recently, perhaps that could be an option in these cases. Just Step Sideways from this world ..... today 21:29, 11 December 2024 (UTC)
- You can't use PC on talk pages. See Wikipedia:Pending changes#Frequently asked questions, item 3. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 00:30, 12 December 2024 (UTC)
- Very few admins protect their talk pages, and for the ones I'm aware of, it's for very good reasons. Admins do not need to risk long-term harassment just because someone else might want to talk to them.
- It would make sense for us to suggest an alternative route. That could be to post on your own talk page and ping them, or it could be to post at (e.g.,) WP:AN for any admin. The latter has the advantage of working even when the admin is inactive/no longer an admin. WhatamIdoing (talk) 21:42, 12 December 2024 (UTC)
- It's covered at Wikipedia:Protection policy#User talk pages. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 22:54, 12 December 2024 (UTC)
- That says "Users whose talk pages are protected may wish to have an unprotected user talk subpage linked conspicuously from their main talk page to allow good-faith comments from users that the protection restricts editing from."
- And if they "don't wish", because those pages turn into harassment pages, then what? WhatamIdoing (talk) 19:33, 13 December 2024 (UTC)
- Then it can be dealt with. But an admin shouldn't be uncommunicative. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 19:52, 13 December 2024 (UTC)
- Would there be value in changing that to require users whose talk page is protected to conspicuously state an alternative on-wiki method of contacting them, giving an unprotected talk subpage as one example method? Thryduulf (talk) 21:40, 13 December 2024 (UTC)
- For admins yes. But for regular editors it could depend on the problem. CambridgeBayWeather (solidly non-human), Uqaqtuq (talk), Huliva 23:01, 13 December 2024 (UTC)
- In general user talk pages shouldn't be protected, but there may be instances when that is needed. However, ADMINACCT only requires that admins respond to community concerns; it doesn't require that an admin's talk page always be available. There are other methods of communicating, as others have mentioned. There's nothing in ADMINACCT that says a protected user talk page is an accountability issue. -- LCU ActivelyDisinterested «@» °∆t° 22:33, 12 December 2024 (UTC)
Question(s) Stemming from Undiscussed Move
[edit]"AIM-174 air-to-air missile" was moved without discussion to "AIM-174B." Consensus was reached RE: the removal of "air-to-air missile," but no consensus was reached regarding the addition or removal of the "B." After a no-consensus RM close (which should have brought us back to the original title, sans agreed-upon unneeded additional disambiguator, in my opinion), I requested the discussion be re-opened, per pre-MRV policy. (TO BE CLEAR; I should have, at this time, requested immediate reversion. However, I did not want to be impolite or pushy) The original closer -- Asukite (who found for "no consensus") was concerned they had become "too involved" in the process and requested another closer. Said closer immediately found consensus for "AIM-174B." I pressed-on to a MRV, where an additional "no consensus" (to overturn) finding was issued. As Bobby Cohn pointed-out during the move review, "I take issue with the participating mover's interpretation of policy 'Unfortunately for you, a no consensus decision will result in this article staying here' in the RM, and would instead endorse your idea that aligns with policy, that a no consensus would take us back the original title, sans extra disambiguatotr."
The issues, as I see them, are as follows:
WP:RMUM: The move from “AIM-174 air-to-air missile” to “AIM-174B” was conducted without discussion, and I maintain all post-move discussions have achieved "no consensus."
Burden of Proof: The onus should be on the mover of the undiscussed title to justify their change, not on others to defend the original title. I refrained from reverting prior to initiating the RM process out of politeness, which should not shift the burden of proof onto me.
Precedent: I am concerned with the precedent. Undiscussed moves may be brute-forced into acceptance even if "no consensus" or a very slim consensus (WP:NOTAVOTE) is found?
Argument in favor of "AIM-174": See the aforementioned RM for arguments in favor and against. However, I would like to make it clear that I was the only person arguing WP. Those in favor of "174B" were seemingly disagreeing with my WP arguments, but not offering their own in support of the inclusion of "B." That said, my primary WP-based argument is likely WP:CONSISTENT; ALL U.S. air-to-air missiles use the base model as their article title. See: AIM-4 Falcon, AIM-26 Falcon, AIM-47 Falcon, AIM-9 Sidewinder, AIM-7 Sparrow, AIM-54 Phoenix, AIM-68 Big Q, AIM-82, AIM-95 Agile, AIM-97 Seekbat, AIM-120 AMRAAM, AIM-132, AIM-152 AAAM, AIM-260. 174"B" is unnecessary and violates consistency.
Do my policy contentions hold any weight? Or am I mad? Do I have any path forward, here?
TO BE CLEAR, I am not alleging bad faith on behalf of anyone, and I am extremely grateful to all those who have been involved, particularly the RM closer that I mentioned, as well as the MRV closer, ModernDayTrilobite. I would like to make it clear that this isn't simply a case of a MRV 'not going my way.' Again, I am concerned w/ the precedent and with the onus having been shifted to me for months. I also apologize for the delay in getting this here; I originally stopped over at the DRN, but Robert McClenon kindly suggested I instead post here. MWFwiki (talk) 00:08, 12 December 2024 (UTC)
- Are you familiar with Wikipedia:Article titles#Considering changes? Do you think you understand why that rule exists? WhatamIdoing (talk) 23:31, 12 December 2024 (UTC)
- I am quite familiar with it. It seemingly supports my argument(s), so...? Is there a particular reason you're speaking in quasi-riddles? MWFwiki (talk) 01:11, 13 December 2024 (UTC)
- If yours is the title favored by the policy, then none of this explanation makes any difference. You just demand that it be put back to the title favored by the policy, and editors will usually go along with it. (It sometimes requires spelling out the policy in detail, but ultimately, most people want to comply with the policy.)
- If yours is not the title favored by the policy, then the people on the other 'side' are going to stand on policy when you ask to move it, so you'd probably have to get the policy changed to 'win'. If you want to pursue that, you will need to understand why the rule is set this way, so that you have a chance of making a convincing argument. WhatamIdoing (talk) 05:24, 13 December 2024 (UTC)
- I think several individuals involved in this process have agreed that the default title is the favored title, at least as far as WP:TITLECHANGES, as you say.
(The only reason I listed any further ‘litigation’ here is to show what was being discussed in general, for convenience’s sake, not necessarily to re-litigate.)
However, at least two individuals involved have expressed to me that they felt their hands were tied by the RM/MRV process. Otherwise, as I mentioned (well, as Bobby_Cohn mentioned), the train of thought seemed to be “well, I don’t want the title to be changed,” and this was seemingly enough to override policy. Or, at best, it was seemingly a “well, it would be easier to just leave it as-is” sort of decision. - And again, I, 100%, should have been more forceful; the title should have been reverted per the initial “no consensus” RM closure, and I will certainly bear your advice in mind in the future. That said, I suppose what I am asking is: would it be inappropriate to ask the original RM closer to revert the article at this point, given how much time has passed?
MWFwiki (talk) 06:29, 13 December 2024 (UTC)
- Given what was written in Talk:AIM-174B#Requested move 20 September 2024 six weeks ago, I think that none of this is relevant. "Consensus to keep current name" does not mean that you get to invoke rules about what happens when there is no consensus. I suggest that you give up for now, wait a long time (a year? There is no set time, but it needs to be a l-o-n-g time), and maybe start a new Wikipedia:Requested moves (e.g., in 2026). WhatamIdoing (talk) 19:41, 13 December 2024 (UTC)
- Thanks! MWFwiki (talk) 05:09, 14 December 2024 (UTC)
- Everything ModernDayTrilobite advised you of is correct. Vpab15 closed the RM and determined that consensus was reached. Nothing since then has overturned or otherwise superseded Vpab15's closure. Therefore that closure remains in force. You already challenged the validity of Vpab15's closure at move review, and you have no avenue for challenging it again. Your best bet is to wait a tactful amount of time (several months) before starting another RM. And in that RM, none of this procedural stuff will matter, and you will be free to focus just on making the clearest, simplest case for why AIM-174 is the best title. Adumbrativus (talk) 06:10, 13 December 2024 (UTC)
- I suppose my issue is better summed-up by my above discussion with WhatamIdoing; The MRV shouldn’t have been required. That burden should never have been on me. The title should have been reverted at the initial “no consensus” per WP:TITLECHANGES. Otherwise, undiscussed moves — when challenged — may now be upheld by either consensus or no consensus? This is not what WP:TITLECHANGES says, obviously. That said I take full responsibility for not being clearer with this argument, and instead focusing on arguing for a ‘different’ title, when I should have been arguing for the default title per TITLECHANGES. MWFwiki (talk) 06:33, 13 December 2024 (UTC)
- You've repeatedly pointed to the initial self-reverted closure as if it's somehow significant. It isn't. Asukite voluntarily decided to close the discussion, and voluntarily self-reverted their decision to close. It doesn't matter whether you asked for it or someone else asked or no one asked. They had the right to self-revert then, for any reason or no reason. The net result is the same as if Asukite had never closed it at all. Only Vpab15's closure, which was 100% on Vpab15's own authority and 0% on the supposed authority of the annulled earlier closure, is binding. Adumbrativus (talk) 09:22, 13 December 2024 (UTC)
- I don't disagree with your latter statement, but why would an initial finding of no-consensus not matter? It should have brought us back to the default title, not simply been reverted. Because that policy wasn't followed, I'm here now, is my point. Regardless, I understand; Thank you for your advice! Well, I appreciate your time and consideration! :-) MWFwiki (talk) 05:08, 14 December 2024 (UTC)
CSD A12. Substantially written using a large language model, with hallucinated information or fictitious references
[edit]The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
When fixing up new articles, I have encountered articles that appear to have been substantially generated by AI, containing hallucinated information. While these articles may not meet other criteria for speedy deletion, as the subjects themselves are sometimes real and notable, waiting for seven days to PROD the articles is inefficient. I recommend designating WP:A12 for the speedy deletion of these articles. I have created a template (User:Svampesky/Template:Db-a12) for use if this proposal is successful. A recent example is the article on the Boston University Investment Office, where the author explicitly disclosed that it was created using a large language model and contains references to sources that don't exist. I initially G11'd it, as it seemed the most appropriate, but was declined, and the article was subsequently PRODed. Svampesky (talk) 21:13, 12 December 2024 (UTC)
- CSD are generally limited to things that are unambiguously obvious. I imagine the number of cases in which it's unambiguously obvious that the entire page was generated by an LLM (as opposed to the editor just using the LLM to generate references, for example) is small enough that it doesn't warrant a speedy deletion criterion. --Ahecht (TALK PAGE) 21:29, 12 December 2024 (UTC)
- I like this idea but agree that it's better not as a CSD but perhaps its own policy page. Andre🚐 21:33, 12 December 2024 (UTC)
- I don't think it even merits a policy page. The number of cases where the LLM use is objectively unambiguous, and the article content sufficiently problematic that deletion is the only appropriate course of action and it cannot be (speedily) deleted under existing policy is going to be vanishingly small. Even the OP's examples were handled by existing processes (PROD) sufficiently. Thryduulf (talk) 22:11, 12 December 2024 (UTC)
- @Svampesky, when you say that Wikipedia:Proposed deletion is "inefficient", do you mean that you don't want to wait a week before the article gets deleted? WhatamIdoing (talk) 23:32, 12 December 2024 (UTC)
- My view is that Wikipedia:Proposed deletion is inefficient for articles that clearly contain hallucinated LLM-generated content and fictitious references (which almost certainly will be deleted), leaving them in the mainspace for longer than necessary. Svampesky (talk) 00:03, 13 December 2024 (UTC)
- Efficiency usually compares the amount of effort something takes, not the length of time it takes. "Paint it and leave it alone for 10 minutes to dry" is the same amount of hands-on work as "Paint it and leave it alone for 10 days to dry", so they're equally efficient processes. It sounds like you want a process that isn't less hands-on work/more efficient, but instead a process that is faster.
- Also, if the subject qualifies for an article, then deletion isn't necessarily the right solution. Blanking bad content and bad sources is officially preferred (though more work) so that there is only verifiable content with one or more real sources left on the page – even if that content is only a single sentence.
- Efficiency and speed is something that many editors like. However, there has to be a balance. We're WP:HERE to build an encyclopedia, which sometimes means that rapidly removing imperfect content is only the second or third most important thing we do. WhatamIdoing (talk) 00:43, 13 December 2024 (UTC)
- This part
as the subjects themselves are sometimes real and notable
is literally an inherent argument against using CSD (or PROD for that matter). WP:TNT the article to a sentence if necessary, but admitting that you're trying to delete an article you know is notable just means you're admitting to vandalism. SilverserenC 00:07, 13 December 2024 (UTC)
- The categorization of my proposal as "admitting to vandalism" is incorrect. WP:G11, the speedy deletion criterion I initially used for the article, specifies deleting articles that "would need to be fundamentally rewritten to serve as encyclopedia articles". Articles that have been generated using large language models, with hallucinated information or fictitious references, would need to be fundamentally rewritten to serve as encyclopedia articles. Svampesky (talk) 00:42, 13 December 2024 (UTC)
- Yes, but G11 is looking for blatant advertising ("Buy widgets now at www.widgets.com! Blue-green widgets in stock today!"). It's not looking for anything and everything that needs to be fundamentally re-written. WhatamIdoing (talk) 00:45, 13 December 2024 (UTC)
- (Edit Conflict) How does G11 even apply here? Being written via LLM does not make an article "promotional". Furthermore, even that CSD criterion states
If a subject is notable and the content could plausibly be replaced with text written from a neutral point of view, this is preferable to deletion.
I.e. TNT it to a single sentence and problem solved. SilverserenC 00:46, 13 December 2024 (UTC)
- The venue for proposing new criteria is at Wikipedia talk:Criteria for speedy deletion. So please make sure that you don't just edit in a new criterion without an RFC approving it, else it will be quickly reverted. Graeme Bartlett (talk) 00:20, 13 December 2024 (UTC)
- Since we are talking about BLPs… the harm of hallucinated information does need to be taken very seriously. I would say the first step is to stubbify.
- However, Deletion can be held off as a potential second step, pending a proper BEFORE check. Blueboar (talk) 01:06, 13 December 2024 (UTC)
- If the hallucination is sufficiently dramatic ("Joe Film is a superhero action figure", when it ought to say that he's an actor who once had a part in a superhero movie), then you might be able to make a good case for {{db-hoax}}. WhatamIdoing (talk) 05:26, 13 December 2024 (UTC)
- I have deleted an AI generated article with fake content and references as a hoax. So that may well be possible. Graeme Bartlett (talk) 12:23, 13 December 2024 (UTC)
- Isn't this covered by WP:DRAFTREASON? Gnomingstuff (talk) 20:34, 13 December 2024 (UTC)
AFD clarification
[edit]The Articles for deletion page states that:
If a redirection is controversial, however, AfD may be an appropriate venue for discussing the change in addition to the article's talk page.
Does this mean that an AFD can be started by someone with the intent of redirecting instead of deleting? Plasticwonder (talk) 04:06, 13 December 2024 (UTC)
- Yes. If there is a contested redirect, the article is restored and it is brought to AfD. voorts (talk/contributions) 04:34, 13 December 2024 (UTC)
- I think the ideal process is:
- Have an ordinary discussion on the talk page about redirecting the page.
- If (and only if) that discussion fails to reach consensus, try again at AFD.
- I dislike starting with AFD. It isn't usually necessary, and it sometimes has a feel of the nom trying to get rid of it through any means possible ("I'll suggest a WP:BLAR, but maybe I'll be lucky and they'll delete it completely"). WhatamIdoing (talk) 05:31, 13 December 2024 (UTC)
- Would need some stats on the "it isn't usually necessary" claim; my intuition based on experience is that if a BLAR is contested it's either dropped or ends up at AfD. CMD (talk) 05:48, 13 December 2024 (UTC)
- I agree with that. From what I have seen at least, if redirecting is contested, it then is usually discussed at AFD, but that's just me. Plasticwonder (talk) 08:42, 13 December 2024 (UTC)
- It depends how active the respective talk pages are (redirected article and target), but certainly for ones that are quiet AfD is going to be the most common. Thryduulf (talk) 09:33, 13 December 2024 (UTC)
- It will also depend on whether you advertise the discussion, e.g., at an active WikiProject. WhatamIdoing (talk) 19:44, 13 December 2024 (UTC)
- I usually just go straight to AfD. I've found that editors contesting redirects usually !vote keep and discussing on talk just prolongs the inevitable AfD. voorts (talk/contributions) 14:58, 13 December 2024 (UTC)
- Gotcha. Plasticwonder (talk) 15:29, 13 December 2024 (UTC)
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- This is now on multiple different pages.[13][14][15][16] Please discuss at Talk:Autism. WhatamIdoing (talk) 20:09, 13 December 2024 (UTC)
Hello all, I can only occasionally attend Wikipedia to edit or respond. I recently went through the current version of the Wikipedia article on Autism, and I found that this article is NOT representative of reality or of encyclopedic wholeness. The huge, verbose, highly technical article is biased towards the medical model of disability and medical genetics, with nearly zero information regarding anthropology, evolution, neurodiversity, accommodation, accessibility, augmentative and alternative communication, and all that actually helps the wellbeing of Autistic people. The page boldly focuses on controversial methods such as ABA, including EIBI (early intensive behavioral intervention), DTT (discrete trial training), etc., without any mention of the concerns or criticisms against them. I entered the talk page, but it has been turned literally into a warzone, where any dissenting viewpoint is being silenced in the name of a "global and unanimous scientific consensus" which is simply wrong. It is mostly a view held by the biomedical and pharmaceutical majority. But outside of that, opposing viewpoints do exist among actual Autistic populations (who have the lived experience) and in anthropology, sociology, psychology, etc. I added an "unbalanced" tag for reader information (I did not call for complete erasure of the controversial viewpoints; I just needed the reader to know that there are other views), however the "unbalanced" tag was soon reverted.
It is not possible for me to attend daily and post arguments and counter-arguments. I have to acknowledge that, if this kind of silencing continues, Wikipedia will have literally failed as an encyclopedia, as well as from a public health and education welfare perspective.
I feel like this needs editors' attention. Autism is NOT a condition well understood by the majority; lived experience plays the ultimate role in how a person feels about their life situation, and "Nothing about us without us" is an important ethical rule in disability cultures.
It is worth mentioning that each disability is unique, and the lived experiences are different. There are generally two paradigms:
- (1) The first paradigm assumes there is a fixed, "normal", gold-standard "healthy person"; any deviation from that is a pathology; society is flawless and 'just'; and any outliers must be assimilated or conformed into the mainstream, or eradicated. It externally defines what a good life is.
- (2) The second paradigm says that a disability (better put, a disablement, or "dis-abled" as a verb) means that human bodies and minds are inherently diverse, varying, and evolving, with no single fixed "one size fits all" baseline. Also, the same person can vary in multiple dimensions (as seen in Twice exceptional); the value of a person shouldn't depend on productivity; coincidence of wants is a fallacy; society is NOT just, and it needs to accommodate.
It seems most disabilities fall on a spectrum between a medical impairment and a social incompatibility, rather than purely at one end. However, Autism, being mostly a social and communication difference, falls mostly into the second type, and seems to be better addressed with the second (inside-out) approach.
If we keep arguing from a narrow perspective of medical biology, we would never know the entire scenario. RIT RAJARSHI (talk) 06:26, 13 December 2024 (UTC)
- Without commenting on the actual topic, I would say this sounds like just a content dispute localised on one article, and should be undertaken at the talk page rather than here. If there are reliable relevant sources that are in scope then this topic could be added to the article, but it is your responsibility to find those sources and to defend them if questioned. BugGhost 🦗👻 11:35, 13 December 2024 (UTC)
- Thank you, but the dispute is too intense. Also, some principles like "nothing about us without us" should be part of Wikipedia policy, especially regarding when a majority voice can suppress a marginalized voice, and especially for information that affects minority groups, or voices that are not well represented and therefore need amplification. RIT RAJARSHI (talk) 12:39, 13 December 2024 (UTC)
- I've just had a look at the talk page, and I don't think it is by any means too intense. You said your view, with minimal sources, and @Димитрий Улянов Иванов replied cordially to you addressing your concerns. Your reply was to say "stop name calling" (I couldn't see any evidence of name calling) and not much else. Again: I'm not commenting on the actual substance of your point of view - just that, as it stands, the talk page is the right place for this discussion, and you should engage with it in earnest with sources to back your view up. (I support any editor who wants to hat this section.) BugGhost 🦗👻 14:52, 13 December 2024 (UTC)
- Thank you, but the dispute is too intense. Also some policies like "nothing about us without us" should be in Wikipedia policy, especially about when a majority voice can suppress a marginalized voice. Esp. information those affect minority groups. Or the voices not well represented, and therefore needs amplification. RIT RAJARSHI (talk) 12:39, 13 December 2024 (UTC)
RfC: Voluntary RfA after resignation
[edit]
Should Wikipedia:Administrators#Restoration of admin tools be amended to:
- Option 1 – Require former administrators to request restoration of their tools at the bureaucrats' noticeboard (BN) if they are eligible to do so (i.e., they do not fit into any of the exceptions).
- Option 2 – Maintain the status quo that former administrators who would be eligible to request restoration via BN may instead request restoration of their tools via a voluntary request for adminship (RfA).
Background: This issue arose in one recent RfA and is currently being discussed in an ongoing RfA.
Note: There is an ongoing related discussion at Wikipedia:Village pump (idea lab) § Making voluntary "reconfirmation" RFA's less controversial.
Note: Option 2 was modified around 22:08, 15 December 2024 (UTC). voorts (talk/contributions) 21:14, 15 December 2024 (UTC)
Note: Added option 3. theleekycauldron (talk • she/her) 22:12, 15 December 2024 (UTC)
- 2 per Kline's comment at Hog Farm's RfA. If an admin wishes to be held accountable for their actions at a re-RfA, they should be allowed to do so. charlotte 👸🎄 21:22, 15 December 2024 (UTC)
- There is ongoing discussion about this at Wikipedia:Village pump (idea lab)#Making voluntary "reconfirmation" RFA's less controversial. CMD (talk) 21:24, 15 December 2024 (UTC)
- 1 * Pppery * it has begun... 21:25, 15 December 2024 (UTC)
- 2 I don't see why people trying to do the right thing should be discouraged from doing so. If others feel it is a waste of time, they are free to simply not participate. El Beeblerino if you're not into the whole brevity thing 21:27, 15 December 2024 (UTC)
- 2 Getting reconfirmation from the community should be allowed. Those who see it as a waste of time can ignore those RfAs. Schazjmd (talk) 21:32, 15 December 2024 (UTC)
- Of course they may request at RfA. They shouldn't but they may. This RfC feels like it does nothing to address the criticism actually in play, and per the link to the idea lab discussion it's premature to boot. Barkeep49 (talk) 21:38, 15 December 2024 (UTC)
- 2 per my comments at the idea lab discussion and Queen of Hearts, Beeblebrox and Schazjmd above. I strongly disagree with Barkeep's comment that "They shouldn't [request the tools back at RFA]". It shouldn't be made mandatory, but it should be encouraged where the time since desysop and/or the last RFA has been lengthy. Thryduulf (talk) 21:42, 15 December 2024 (UTC)
- When to encourage it would be a worthwhile RfC and such a discussion could be had at the idea lab before launching an RfC. Best, Barkeep49 (talk) 21:44, 15 December 2024 (UTC)
- I've started that discussion as a subsection to the linked VPI discussion. Thryduulf (talk) 22:20, 15 December 2024 (UTC)
- 1 or 3. RFA is an "expensive" process in terms of community time. RFAs that qualify should be fast-tracked via the BN process. It is only recently that a trend has emerged of folks who don't need to RFA RFAing again: 2 in the last 6 months. If this continues to scale up, it is going to take up a lot of community time, and create noise in the various RFA statistics and RFA notification systems (for example, watchlist notices and User:Enterprisey/rfa-count-toolbar.js). –Novem Linguae (talk) 21:44, 15 December 2024 (UTC)
- Making statistics "noisy" is just a reason to improve the way the statistics are gathered. In this case collecting statistics for reconfirmation RFAs separately from other RFAs would seem to be both very simple and very effective. If (and it is a very big if) the number of reconfirmation RFAs means that notifications are getting overloaded, then we can discuss whether reconfirmation RFAs should be notified differently. As far as differentiating them, that is also trivially simple - just add a parameter to template:RFA (perhaps "reconfirmation=y") that outputs something that bots and scripts can check for. Thryduulf (talk) 22:11, 15 December 2024 (UTC)
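(For concreteness, a minimal sketch of how a statistics script could bucket reconfirmation RfAs separately, assuming the hypothetical "reconfirmation=y" parameter suggested above; the parameter, the regex, and the example page title are illustrative assumptions, not an existing feature of Template:RFA.)

import re
import pywikibot

site = pywikibot.Site("en", "wikipedia")

def is_reconfirmation_rfa(title: str) -> bool:
    """Return True if the RfA page's wikitext marks it as a reconfirmation."""
    text = pywikibot.Page(site, title).text
    # Hypothetical marker: a "reconfirmation=y" parameter in the RfA template call.
    return bool(re.search(r"\|\s*reconfirmation\s*=\s*y", text, re.IGNORECASE))

# Usage: tally reconfirmation RfAs separately from first-time RfAs.
if is_reconfirmation_rfa("Wikipedia:Requests for adminship/Example"):
    print("Reconfirmation RfA: count separately")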
- Option 3 looks like a good compromise. I'd support that too. –Novem Linguae (talk) 22:15, 15 December 2024 (UTC)
- I'm weakly opposed to option 3; editors who want feedback and a renewed mandate from the community should be entitled to it. If they felt that a quick endorsement was all that was required, then they could have had that at BN; they explicitly chose not to go that route. Nobody is required to participate in an RFA, so if it is going the way you think it should, or you don't have an opinion, then just don't participate and your time has not been wasted. Thryduulf (talk) 22:20, 15 December 2024 (UTC)
- 2. We should not make it more difficult for administrators to be held accountable for their actions in the way they please. JJPMaster (she/they) 22:00, 15 December 2024 (UTC)
- Added option 3 above. Maybe worth considering as a happy medium, where unsure admins can get a check on their conduct without taking up too much time. theleekycauldron (talk • she/her) 22:11, 15 December 2024 (UTC)
- 2 – If a former admin wishes to subject themselves to RfA to be sure they have the requisite community confidence to regain the tools, why should we stop them? Any editor who feels the process is a waste of time is free to ignore any such RfAs. — Jkudlick ⚓ (talk) 22:12, 15 December 2024 (UTC)
- Option 3 per leek. voorts (talk/contributions) 22:16, 15 December 2024 (UTC)
- 2 as per JJPMaster. Regards, --Goldsztajn (talk) 22:20, 15 December 2024 (UTC)
Discussion
[edit]- @Voorts: If option 2 gets consensus, how would this RfC change the wording
Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.
Or is this an attempt to see if that option no longer has consensus? If so, why wasn't alternative wording proposed? As I noted above, this feels premature in multiple ways. Best, Barkeep49 (talk) 21:43, 15 December 2024 (UTC)
- I've re-opened this per a request on my talk page. If other editors think this is premature, they can !vote accordingly and an uninvolved closer can determine if there's consensus for an early close in deference to the VPI discussion. voorts (talk/contributions) 21:53, 15 December 2024 (UTC)
- The discussion at VPI, which I have replied on, seems to me to be different enough from this discussion that both can run concurrently. That is, however, my opinion as a mere editor. — Jkudlick ⚓ (talk) 22:01, 15 December 2024 (UTC)
- @Voorts, can you please reword the RfC to make it clear that Option 2 is the current consensus version? It does not need to be clarified – it already says precisely what you propose. – bradv 22:02, 15 December 2024 (UTC)