Project:Support desk

About this board

Welcome to the MediaWiki Support desk. This is a place where you can ask any questions you have about installing, using, or administering the MediaWiki software.



Before you post

Post a new question

  1. To help us answer your questions, please indicate which version of MediaWiki you are using, as found on your wiki's Special:Version page.
  2. If possible, add $wgShowExceptionDetails = true; error_reporting( -1 ); ini_set( 'display_errors', 1 ); to LocalSettings.php in order to make MediaWiki show more detailed error messages.
  3. Please include the web address (URL) to your wiki if possible. It's often easier for us to identify the source of the problem if we can see the error directly.
  4. To start a new thread, click the box with the text "Start a new topic".
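The debug settings from step 2 go at the bottom of LocalSettings.php; a minimal sketch (remove the lines again once you have captured the error details):

```php
// At the bottom of LocalSettings.php (debugging only; remove when done):
$wgShowExceptionDetails = true;   // show full exception details in the browser
error_reporting( -1 );            // report every PHP error
ini_set( 'display_errors', 1 );   // print errors to the page, not just the log
```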

Force thumbnail generation with PdfHandler?

Kjecl (talkcontribs)

I recently performed a reinstall and database restore of a wiki to fix a different problem. An unexpected positive side effect was that thumbnails are now generated for new PDF files uploaded to the wiki. Cool!

Is there a way to force the generation of thumbnails for previously uploaded PDFs? I have tried:

php refreshImageMetadata.php -f

followed by:

php rebuildImages.php

without effect.

Thanks for any suggestions.

Bawolff (talkcontribs)

What do the old images look like? Is there an error message instead of a thumbnail, or is it just missing? Does MediaWiki show the correct dimensions for the files in question (as in, does it say 0x0 pixels, or does it give a real number)?

Kjecl (talkcontribs)

Thank you for your interest in my problem.

The thumbnails for older files are a red stylized 'A' on a white background. Nope, no error message.

If I go to the Special:ListFiles page, and click on a red A icon, it shows dimensions of 0 x 0.

I wonder if I could use importImages.php to reload all of the older files. An experiment is in order for one of those files, I think.

Kjecl (talkcontribs)

Yes, I can use importImages.php to reload an older file. No, it does not help.

Bawolff (talkcontribs)

The 0x0 usually means that Extension:PdfHandler had problems running the pdfinfo command, which would prevent the creation of thumbnails.

However if new files are working, then that means that it should be able to run the pdfinfo command.

In general, running refreshImageMetadata.php -f should fix that. Does the script actually say it refreshed the file in question? If it still isn't working, I could see two possible problems:

  • some sort of cache issue. You could try adding ?action=purge (or &action=purge ) to the end of the File: page URL (the page in the wiki, not the actual file) to see if that helps
  • the older files aren't readable by the web server. This could happen if, when you transferred web servers, the image files ended up with the wrong permissions, or if some other sandboxing system prevented access.

Both of those feel like a long shot, though.

Of course, it's always possible that the file being tested is just a broken PDF.
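To sketch the purge suggestion above (the wiki URL and file name here are placeholders): whether you append with ? or & depends on whether the page URL already has a query string.

```shell
# Append action=purge to a File: page URL (URLs below are placeholders).
# Use '?' when the URL has no query string yet, '&' when it already does.
purge_url() {
    case "$1" in
        *\?*) echo "$1&action=purge" ;;
        *)    echo "$1?action=purge" ;;
    esac
}

purge_url 'https://wiki.example.org/wiki/File:Old_manual.pdf'
purge_url 'https://wiki.example.org/index.php?title=File:Old_manual.pdf'
```

Visiting the resulting URL in a browser (or fetching it while logged in) asks MediaWiki to regenerate the cached page and its thumbnails.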

Kjecl (talkcontribs)

I used:

ls -lR | grep -v -e <string>

at the top level of the image upload folder, where <string> was either the owner or the permissions, to check that. I did not find any files with the wrong permissions or ownership.

refreshImageMetadata.php -f

does not say that it refreshed the files (~80 of them) in question. pdfinfo is able to run against all of the files in question sufficiently to report the PDF version. PDF versions for these files range from 1.3 to 1.7.

FILES=$(find "$PREFIX" -name "*.pdf" -type f | grep -v "/temp" | grep -v "/deleted" | grep -v "/archive")

for F in $FILES; do
    echo "$(pdfinfo "$F" | grep 'PDF version')    $F"
done

So, there are about 80 of these files with the 'red A' thumbnails. Error-prone as it would be, we could re-upload all of them to get the new thumbnails, but we would prefer not to take that approach. There is also the concern that this symptom points to some deeper problem that will bite us later.

Thanks again for your help.

2A02:A03F:66D9:F301:512D:41C:6D44:DFFF (talkcontribs)

Hello everyone !

I recently installed mediawiki with the 'Parser' extension.

But when I try to log in to my wiki, I get this error :

"[ZkDhPaXGad5SVb3nIP2RQAAAAAM] /wiki_test2/index.php?title=Accueil MWException: Parser state cleared while parsing. Did you call Parser::parse recursively? Lock is held by: #0 /var/www/alternc/t/titouan/www/queernet.xyz/wiki_test2/includes/parser/Parser.php(882): Parser->lock()"

Does anyone have an idea what might be causing this ?

Thanks :)

Bawolff (talkcontribs)

I have never heard of an extension named "Parser". Presumably the extension is broken.

2A02:A03F:66D9:F301:512D:41C:6D44:DFFF (talkcontribs)

I think the extension is called ParserFunctions. It's under the Syntax Analyser Add-ons options when you install MediaWiki.

Bawolff (talkcontribs)

There is a very big difference between "Parser" and "ParserFunctions".

Normally the error message should be longer than that. The rest of the error message likely provides information on the cause of the problem.

"Syntax Analyzer Add-ons" is not a product we make (nor have I ever heard of it). If you are having trouble with a third-party distribution of MediaWiki, consider using the official version at Download. The amount of help we can provide for unofficial distributions made by third parties is limited.

2A02:A03F:66D9:F301:2D10:2604:D876:5217 (talkcontribs)

Hey,

I don't think this is a third-party issue, since I downloaded it via the official MediaWiki site. However, I might have gotten the name of the category wrong, since I installed it in French (it was called 'greffons d'analyseur syntaxique', roughly 'parser add-ons') and I'm not sure what the English name is.

The whole error message is pretty long, so I'm going to split it to avoid getting blocked. Here is the first part:

[ZkIaXNlSSMkzRXDjL4XVsgAAACk] /wiki_test2/index.php?title=Accueil MWException: Parser state cleared while parsing. Did you call Parser::parse recursively? Lock is held by: #0 /var/www/alternc/t/titouan/www/queernet.xyz/wiki_test2/includes/parser/Parser.php(882): Parser->lock()

2A02:A03F:66D9:F301:2D10:2604:D876:5217 (talkcontribs)
Bawolff (talkcontribs)

I did not get the error when going to your site (maybe it only happens if you're not logged in), but the image of the error was what I needed.

I think it's unlikely that this error has anything to do with the ParserFunctions extension.

Can you double-check that your version of the Vector skin is the correct version for your wiki? Does the error message happen on other skins too, or just Vector?

If that doesn't work, I would suggest disabling all skins and extensions, then re-enabling them one by one to see if a specific extension or skin is causing it.

P.S. If there is something you're not sure of the English name for, just include the French name as well.

Reply to "parser error"

Scripted initial population of MediaWiki database

Dstahlberg (talkcontribs)

As part of an Ansible-scripted setup of a MediaWiki site, I need to configure MediaWiki. The LocalSettings.php file can be prepared in advance and copied to the site, but is there a scripted way to initially populate the database without manually going through the installation page?

Thanks for any pointers and tips!

Bawolff (talkcontribs)

Use the install.php script in the maintenance directory, which can be run from the command line.

Alternatively you could just take a database dump of a fresh install and import that.
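For the first option, a non-interactive run might look like the sketch below. All the database names, passwords, and URLs are placeholders, and the option names should be checked against php maintenance/install.php --help for your MediaWiki version.

```shell
# Sketch: scripted initial install from the command line (values are placeholders).
php maintenance/install.php \
    --dbname=wikidb \
    --dbserver=localhost \
    --dbuser=wikiuser \
    --dbpass='db-secret' \
    --pass='admin-secret' \
    --server='https://wiki.example.org' \
    --scriptpath='/w' \
    'My Wiki' 'Admin'
```

Note that install.php also generates a LocalSettings.php, so if Ansible deploys your own prepared copy afterwards, have it overwrite the generated one.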

Bawolff (talkcontribs)

P.s. you might also be interested in the Meza project


Requesting help using API action=clientlogin

Mrwassen (talkcontribs)

Hi guys,

As a relative newbie to MediaWiki, I am looking for a little help with a very basic SSO project. I have a PHP-based web site, "mysite", which requires user login. I am trying to come up with a PHP script that does the following:


1) User logs into mysite

2) The user login script executes a "wikilogin.php" script

3) The wikilogin.php script logs into mywikisite and creates the required cookies in the browser

4) User can now go to mywikisite and access pages etc. without having to log into mywikisite


I am not trying to build any logic to manage user creation/change/deletion or password change. The assumption for now is simply that credentials are identical across the 2 applications.


I have tried to put together a basic "wikilogin.php" which uses the mediawiki api as follows:

  a) Get a login token using "api.php?action=query&meta=tokens&type=login&format=json"

  b) Parse the returned login token out into a string variable

  c) Perform the login using "api.php?action=clientlogin&username=joe&password=secret&logintoken=<token from step 2>&loginreturnurl=http://mysite.org"


however I am running into the error:

  "code": "badtoken", "info": "Invalid CSRF token."


I have tried to change "type"="csrf" in step 1), however then I get:

  "code": "nologintoken","info": "The \"logintoken\" parameter must be set."


Below is the php - any help would be much appreciated.

Thanks

Dennis



<?php

$ch = curl_init();

curl_setopt($ch, CURLOPT_URL,"http://mywikisite.org/api.php");

curl_setopt($ch, CURLOPT_POST, 1);

curl_setopt($ch, CURLOPT_POSTFIELDS,

            "action=query&meta=tokens&type=login&format=json");

// Receive server response ...

curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$server_output = curl_exec($ch);

//echo $server_output;

$logintoken_array = json_decode($server_output);

$logintoken = $logintoken_array->query->tokens->logintoken;

echo $logintoken;

curl_setopt($ch, CURLOPT_URL,"http://mywikisite.org/api.php");

curl_setopt($ch, CURLOPT_POST, 1);

curl_setopt($ch, CURLOPT_POSTFIELDS,

"action=clientlogin&username=joe&password=secret&logintoken=" . $logintoken . "&loginreturnurl=http://mywikisite.org");

// Receive server response ...

curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$server_output = curl_exec($ch);

echo $server_output;

curl_close ($ch);

?>

Mrwassen (talkcontribs)

Forgot to mention versions:

mediawiki 1.34.4

php 7.2

Bawolff (talkcontribs)

I think you need to tell curl to save and send cookies for the login to work.


You may also be interested in reading about SessionManager - which i think is the more proper way to do what you are trying to do in MediaWiki.
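For reference, the same two-step flow with a shared cookie jar can be sketched in shell (the URL and credentials are placeholders; jq is assumed for JSON parsing). The key point is that both requests must use the same cookie jar, because the login token is only valid together with the session cookie that was set when the token was issued.

```shell
# Sketch: clientlogin with a shared curl cookie jar (all values are placeholders).
API='https://wiki.example.org/w/api.php'
JAR="$(mktemp)"

# Step 1: fetch a login token; the response also sets a session cookie in $JAR.
TOKEN="$(curl -s -c "$JAR" \
    "${API}?action=query&meta=tokens&type=login&format=json" \
    | jq -r '.query.tokens.logintoken')"

# Step 2: POST the token back, sending and updating the same cookie jar.
curl -s -b "$JAR" -c "$JAR" "$API" \
    --data-urlencode 'action=clientlogin' \
    --data-urlencode 'username=joe' \
    --data-urlencode 'password=secret' \
    --data-urlencode "logintoken=${TOKEN}" \
    --data-urlencode 'loginreturnurl=https://wiki.example.org/' \
    --data-urlencode 'format=json'
```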

Mrwassen (talkcontribs)

Hi Bawolff,


Thanks for your help - I was able to make some progress: I rewrote the php script to write a cookie file and then login using action=clientlogin which thankfully returned the following response:

{ "clientlogin": { "status": "PASS", "username": "Admin" }}

However, I think I am still missing something: once this login was successful, I expected to be able to open a MediaWiki page in the same browser without logging in; however, the main page still shows "not logged in".

I also noticed that after the script successfully logged in, there was nothing listed in the browser's "Storage/Cookies" list under the domain.

Is this a case of me not understanding how cookies work?

Any help appreciated.

Dennis

EDIT: or will I need to programmatically open the wiki page from PHP using something like header() after using setcookie() to set the cookies?

EDIT#2:

OK so a little further progress:

I logged in as normal directly through the wiki main page to determine cookie behavior and saw that 3 cookies are created:

1) session cookie containing a token

2) username cookie

3) user ID cookie

I then added code to the php script which replicates these exact cookies after the login is completed using the newly acquired token to create the session cookie.

What I notice is that I can run my PHP script and see the 3 cookies get created, and when the script echoes the page, it shows as logged in.

However, the moment I click a link to go to a different wiki page, two of the cookies disappear, only the user ID cookie remains in the browser, and the page logs out.

So it seems I am close, but the elusive part is how to get those cookies to persist so that the session stays logged in?


php:


<?php

$cookie_jar = tempnam('/volume1/web/cookies','cookie');

//retrieve token

$c = curl_init('http://mywikisite/api.php?action=query&meta=tokens&type=login&format=json');

curl_setopt($c, CURLOPT_POST, 1);

curl_setopt($c, CURLOPT_HEADER, 0);

curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);

curl_setopt($c, CURLOPT_COOKIEFILE, $cookie_jar);

curl_setopt($c, CURLOPT_COOKIEJAR, $cookie_jar);

$page = curl_exec($c);

$logintoken_array = json_decode($page);

$logintoken = $logintoken_array->query->tokens->logintoken;

echo $logintoken;

curl_close($c);

//log in

$c = curl_init('http://mywikisite/api.php');

$post = [

    'action' => 'clientlogin',

'password' => 'xxxxxxxxx',

    'username' => 'admin',

    'logintoken' => $logintoken,

'loginreturnurl' => 'http://mywikisite/index.php'

];

$c = curl_init('http://mywikisite/tng/wiki/api.php');

curl_setopt($c, CURLOPT_POST, 1);

curl_setopt($c, CURLOPT_POSTFIELDS, $post);

curl_setopt($c, CURLOPT_HEADER, 0);

curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);

curl_setopt($c, CURLOPT_COOKIEFILE, $cookie_jar);

curl_setopt($c, CURLOPT_COOKIEJAR, $cookie_jar);

$page = curl_exec($c);

curl_close($c);

//create 3 cookies

$cookie_name = "tng_upgrade_12_3_wiki__session";

$cookie_value = $logintoken;

setcookie($cookie_name, $cookie_value, time() + (86400 * 30), "/"); // 86400 = 1 day

$cookie_name = "tng_upgrade_12_3_wiki_UserID";

$cookie_value = "1";

setcookie($cookie_name, $cookie_value, time() + (86400 * 30), "/"); // 86400 = 1 day

$cookie_name = "tng_upgrade_12_3_wiki_UserName";

$cookie_value = "Admin";

setcookie($cookie_name, $cookie_value, time() + (86400 * 30), "/"); // 86400 = 1 day

//open wiki

$c = curl_init('http://mywikisite/index.php?title=Main_Page');

curl_setopt($c, CURLOPT_HEADER, 0);

curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);

curl_setopt($c, CURLOPT_COOKIEFILE, $cookie_jar);

curl_setopt($c, CURLOPT_COOKIEJAR, $cookie_jar);

$page = curl_exec($c);

echo $page;

curl_close($c);

?>

Bawolff (talkcontribs)

The normal approach would be for your script to set its own cookies, and then use sessionmanager so that mediawiki recognizes those cookies and then sets its own cookies appropriately.


I think the reason your code isn't working is that you are assuming the login token will be the same as the session cookie value (I don't think it is).

Mrwassen (talkcontribs)

Hi Bawolff,

Thanks again - based on your earlier reply, I managed to figure out that the returned token is NOT the same as the session cookie token (as you also mentioned above). I was able to get curl to write the cookies to a temporary "cookie jar" file, which I then read and parse into an array, after which I issue a set of setcookie() commands to create the cookies in the browser.

The one minor issue I had is that action=clientlogin does not seem to support a "rememberMe" attribute yet (despite documentation showing an example using it), but since I am now fairly fluent in cookie baking, I can tweak the expiry time to get the session length I would like.

I am only able to use v. 1.34.4 due to other constraints, but perhaps rememberMe was introduced in 1.35.xx?

In any case, many thanks for your help!

Thanks

Dennis

Sasha Rizzetto (talkcontribs)

Hi Mrwassen,

did you finally manage to make the script work so that the session stays logged in? If so, could you please share your code?

Thanks

Mwkaryar (talkcontribs)

Hi Mrwassen,

would you please help me?

I'm new here and I'm a little confused. This is exactly what we want:

1- our users go to the MediaWiki login form (they have no account in MediaWiki)

2- they fill in the username and password inputs with the username and password they use to log in to our website

now we want:

3-the username and password have to be sent to our website API link:

ourSiteDomain.com/Api/GetPersonRow?UserName=$userName&Password=$password

*** The $username and $password are filled in step 2

4-our API returns a JSON response with user info without password

5- if the user is really our user and the API response returns true, the user can log in to MediaWiki

these steps are what we want.

I don't know how to do these steps

thank you very much

Mrwassen (talkcontribs)

I am confused by the statement "1-our users go to the Media Wiki login form (they have no account in Media Wiki)".

If they have no account, then it wouldn't make sense for them to type anything into the MediaWiki login form. Will they eventually become MediaWiki users?

That being said, I am probably not the right person to guide you due to my relative lack of MW experience - hopefully more knowledgeable folks will chime in here.

Bawolff (talkcontribs)

Details might vary depending on which login extensions are installed; your wiki may be different from Wikipedia. Check api.php?action=query&meta=authmanagerinfo&amirequestsfor=login on your wiki to see what the remember-me field is named.

Mrwassen (talkcontribs)

Bawolf: thanks for that advice, I will do some more digging on this.

Sasha: yes, I finally managed to get this working; below is the code. Modify it with your login details, URLs, and temp folder path, then run the script. Once you have run the script, you should find the cookie jar file in the temp folder, as well as "login_details.txt", which will contain the login token and cookie details that were used to create the cookies.

(Since the script sends the cookies as headers, it is not possible to echo anything before that point, hence the "log file".)

Thanks

Dennis

<?php

    //open file for logging progress:

    $temp_folder = '/volume1/web/cookies/';

$log = fopen($temp_folder . 'login_details.txt', 'w');

    //create cookie jar file to temporary store cookie data

$cookie_jar = tempnam($temp_folder,'cookie');

$tokenurl='http://192.168.xxx.xxx:90xx/wiki/api.php?action=query&meta=tokens&type=login&format=json';

// acquire mediawiki login token

$c = curl_init($tokenurl);

curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);

curl_setopt($c, CURLOPT_COOKIEFILE, $cookie_jar);

curl_setopt($c, CURLOPT_COOKIEJAR, $cookie_jar);

$page = curl_exec($c);

$logintoken_array = json_decode($page);

$logintoken = $logintoken_array->query->tokens->logintoken;

    fwrite($log,'Token = ' . $logintoken . PHP_EOL . PHP_EOL);

// log in to mediawiki using action=clientlogin

curl_close($c);

$post = [

'action' => 'clientlogin',

'username' => '<your username>',

'password' => '<your password>',

'logintoken' => $logintoken,

'loginreturnurl' => 'http://192.168.xxx.xxx:90xx'

    ];

$loginurl='http://192.168.xxx.xxx:90xx/wiki/api.php';

$c = curl_init($loginurl);

curl_setopt($c, CURLOPT_POST, 1);

curl_setopt($c, CURLOPT_POSTFIELDS, $post);

curl_setopt($c, CURLOPT_HEADER, 0);

curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);

curl_setopt($c, CURLOPT_COOKIEFILE, $cookie_jar);

curl_setopt($c, CURLOPT_COOKIEJAR, $cookie_jar);

//write cookies to the file $cookie_jar

$page = curl_exec($c);

curl_close($c);

//extract cookies from $cookie_jar

$file=$cookie_jar;

$fopen = fopen($file, "r");

$fread = fread($fopen,filesize($file));

fclose($fopen);

$remove = "\n";

$split = explode($remove, $fread);

$array[] = null;

$tab = "\t";

foreach ($split as $string) {

$row = explode($tab, $string);

array_push($array,$row);

}

    fwrite($log, 'Cookie details:' . PHP_EOL);

for ($x = 5; $x <= 9; $x++) {

$cookie_name =  $array[$x][5];       

        if (isset($cookie_name)) {

$cookie_expire = $array[$x][4];

$cookie_value = $array[$x][6];

setcookie($cookie_name, $cookie_value, $cookie_expire, "/");

fwrite($log, $cookie_name . '|' . $cookie_value . '|' . $cookie_expire . PHP_EOL);

}

}

//delete the file $cookie_jar

//if (file_exists($cookie_jar)) {

//unlink($cookie_jar);

//    }

    fclose($log);

?>

Sasha Rizzetto (talkcontribs)

Thank you very much Mrwassen, the script runs: I get the cookies set, but if I open my MediaWiki homepage I'm not actually logged in. Am I missing something?

Mrwassen (talkcontribs)

Hi,

Understand that I am not very knowledgeable about this topic. That being said, you should check that 4 cookies are created after the login attempt (at least, that is what I am seeing) - these are:

Name = xxxx_wiki_UserName | Value = the wiki user name e.g. "Admin"

Name = xxxx_wiki_UserID | Value = the wiki user ID e.g. 1

Name = xxxx_wiki_Token | Value = a 32 digit alphanumeric token

Name = xxxx_wiki_session | Value = a 32 digit alphanumeric token

The presence of the session cookie may depend on whether you include "rememberMe=1" as a parameter in your action=clientlogin call (I'm not 100% sure).

If you don't get at least the first 3 cookies listed above, then your login wasn't successful.

Another thing to try would be to modify your script adding this line:


echo $page;


right AFTER


$page = curl_exec($c);


Then you should get the result of the login attempt, with a status like "FAIL" or "PASS".


Thanks


Dennis

Sasha Rizzetto (talkcontribs)

Thank you very much MrWassen, I managed to make it work thanks to your tips!


Greetings

Sasha

Robinhair2441 (talkcontribs)

I copied your code changed the URL settings etc and managed to get a successful login: {"clientlogin":{"status":"PASS","username":"Wiki member"}}

However, my wiki home page does not actually load. I just get a blank page with the above 'clientlogin' status message. I assumed using the MediaWiki api.php with clientlogin would log in AND load the web page.

I tried loading the web page at the end of the script with header( "Location: https://xxxxxx.xxx/mediawiki/index.php" ); but that loaded the wiki home page without the above login.

Am I missing the final step to load the page with the required user already logged in?

Environment:

Joomla 3.9.21

php 7.4.14

Sourcerer 8.4.2

MediaWiki 1.34.2


Thanks

Robin

Robinhair2441 (talkcontribs)

I've done a bit more research on this and found the login doesn't persist. Immediately after the API call to 'clientlogin', I queried the login status like this:

.....

//write cookies to the file $cookie_jar

$page = curl_exec($c);

echo $page;

curl_close($c);

//check login

$loginurl='https://xxxxxxx.xxx/mediawiki/api.php?action=query&meta=authmanagerinfo&amirequestsfor=login';

$c = curl_init();

curl_setopt($c, CURLOPT_URL, $loginurl);

$page = curl_exec($c);

echo $page;

curl_close($c);


...and the resulting API call shows no logged-in account!

Any thoughts on what could be preventing the Login from persisting?

Thanks

Robin


OUTPUT......

{"clientlogin":{"status":"PASS","username":"HRCA member"}}

This is the HTML representation of the JSON format. HTML is good for debugging, but is unsuitable for application use.

Specify the format parameter to change the output format. To see the non-HTML representation of the JSON format, set format=json.

See the complete documentation, or the API help for more information.

{
    "batchcomplete": "",
    "query": {
        "authmanagerinfo": {
            "canauthenticatenow": "",
            "cancreateaccounts": "",
            "preservedusername": "",
            "requests": [
                {
                    "id": "MediaWiki\\Auth\\PasswordAuthenticationRequest",
                    "metadata": {},
                    "required": "primary-required",
                    "provider": "Password-based authentication",
                    "account": "",
                    "fields": {
                        "username": {
                            "type": "string",
                            "label": "Username",
                            "help": "Username for authentication."
                        },
                        "password": {
                            "type": "password",
                            "label": "Password",
                            "help": "Password for authentication.",
                            "sensitive": ""
                        }
                    }
                },
                {
                    "id": "MediaWiki\\Auth\\RememberMeAuthenticationRequest",
                    "metadata": {},
                    "required": "optional",
                    "provider": "MediaWiki\\Auth\\RememberMeAuthenticationRequest",
                    "account": "MediaWiki\\Auth\\RememberMeAuthenticationRequest",
                    "fields": {
                        "rememberMe": {
                            "type": "checkbox",
                            "label": "Keep me logged in",
                            "help": "Whether the password should be remembered for longer than the length of the session.",
                            "optional": ""
                        }
                    }
                }
            ]
        }
    }
}
Robinhair2441 (talkcontribs)

I'm new to how cookies are stored and formatted. Do these entries look valid? Is the 'FALSE' aspect a concern? [I've x-ed out the domain name]


# Netscape HTTP Cookie File

# http://curl.haxx.se/docs/http-cookies.html

# This file was generated by libcurl! Edit at your own risk.

#HttpOnly_xxxx.xxx FALSE / TRUE 0 www_mediawiki_mw__session v5cotr45k6i5gs4infvhvjmkhpl9dhg8

#HttpOnly_xxxx.xxx FALSE / TRUE 1612804602 www_mediawiki_mw_UserID 1039

#HttpOnly_xxxx.xxx FALSE / TRUE 1612804602 www_mediawiki_mw_UserName Wiki%20member

#HttpOnly_xxxx.xxx FALSE / TRUE 1612718212 UseDC master

#HttpOnly_xxxx.xxx FALSE / TRUE 1612718212 UseCDNCache false <=====Is this likely to cause a problem?
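For reference, the Netscape cookie-jar format that curl writes is tab-separated: domain, include-subdomains flag, path, secure flag, expiry (0 means a session cookie), name, value. The "#HttpOnly_" prefix just marks HttpOnly cookies, and the FALSE/TRUE flags are a normal part of the format, not errors. A small sketch for inspecting a jar (the file contents below are dummy values):

```shell
# Build a dummy Netscape-format cookie jar; real jars are written by curl.
# Fields: domain, subdomains, path, secure, expiry, name, value (tab-separated).
JAR="$(mktemp)"
printf '# Netscape HTTP Cookie File\n' > "$JAR"
printf '#HttpOnly_example.org\tFALSE\t/\tTRUE\t0\tmw_session\tv5cotr45k6i5\n' >> "$JAR"
printf '#HttpOnly_example.org\tFALSE\t/\tTRUE\t1612804602\tmw_UserID\t1039\n' >> "$JAR"

# Strip the HttpOnly marker so every line parses, then print name=value pairs.
sed 's/^#HttpOnly_//' "$JAR" | awk -F'\t' 'NF>=7 {print $6 "=" $7}'
```

The entry with expiry 0 is a session cookie, which disappears when the browser session ends.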

136.226.244.178 (talkcontribs)

Did the redirection work? I get a blank page.


{ "clientlogin": { "status": "PASS", "username": "Admin" } }


cannot log in on mediawiki, but meta and commons are operational

Summary by Wladek92

ok, restored.

2A02:842A:82DC:C301:E5A8:F3A5:3BAD:3B1F (talkcontribs)

MediaWiki internal error.

Original exception: [4cae876e-4aa1-4e16-a554-02af148f2bf7] 2024-05-14 09:31:55: Fatal exception of type "Wikimedia\Rdbms\DBQueryError"

Exception caught inside exception handler.

Set $wgShowExceptionDetails = true; at the bottom of LocalSettings.php to show detailed debugging information.

TheDJ (talkcontribs)

This is due to the weekly deploy of a new version of MediaWiki, which is experiencing a problem. It should be fixed soon.

2A02:842A:82DC:C301:E5A8:F3A5:3BAD:3B1F (talkcontribs)
Wladek92 (talkcontribs)

OK, retrieved now, thanks.

SimpleSAMLPhp customize authenticate error message

86.245.217.89 (talkcontribs)

what does ann mean in the persons spouses section

2600:1702:2230:C00:7137:398A:4D0F:8D79 (talkcontribs)

what does ann mean in the persons spouses section

Malyacko (talkcontribs)

We don't know. This is a support forum for the MediaWiki software itself, not for random content on pages maybe hosted via MediaWiki. Feel free to ask on whatever website you found this...


account problem - 2 confused together

Picnics (talkcontribs)

Who can I contact about 2 accounts with the same username that the system seems to have confused together?

After some time away, I signed in yesterday and saw the account I recognized which was created in 2006 and had 20 contributions.

When I signed in today to update my current email address, it said the account was created in 2014 with NO contributions.

Thanks

Ciencia Al Poder (talkcontribs)

Getting off a website and attracting contributors

Guillaume Taillefer (talkcontribs)

Hello, I was wondering if there is a specific place where you can post about your wiki and ask for contributors to your site, mainly experienced wiki users. Thanks


Logged in. But not on Boobpedia

Summary by Arlo Barnes

Not a unified account

Erikasteele (talkcontribs)

I'm logged in here but not on Boobpedia, I don't think. I'm not getting the little white star to watch pages yet. I've tried resetting through the email but I'm not getting the emails. Any advice? Please and thank you. Erika

Arlo Barnes (talkcontribs)

Although Boobpedia's login page links to MediaWiki's help pages (because Boobpedia uses the MediaWiki software), the two projects are unconnected and don't use the same login. You can still edit without having a login; your edits are recorded as being from your IP address. If you still want an account, https://boobpedia.com/boobs/Special:ListUsers?creationSort=1&desc=1 shows the most recently-made accounts. Perhaps you could ask an active new account how they were able to sign up despite the registration page being disabled at Boobpedia.