Here is the fixed badge:
<img alt="Static Badge" src="https://img.shields.io/badge/Language-Japanese-red?style=flat-square&link=https%3A%2F%2Fgithub.com%2FTokynBlast%2FpyTGM%2Fblob%2Fmain%2FREADME.jp.md">
Shields.io needs this format:
https://img.shields.io/badge/<label>-<message>-<color>
Without all three parts, it shows a broken image.
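For example, with all three parts filled in, the badge above reduces to:
https://img.shields.io/badge/Language-Japanese-red
(label = Language, message = Japanese, color = red; query parameters such as style and link are optional extras.)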
Now you can display your badge.
document.querySelector(".clickedelement").addEventListener("click", function (e) {
    setTimeout(function () {
        var div = document.querySelector(".my-div-class");
        // style.border starts out as "" (not "none"), so test for the border we set
        if (div.style.border === "1px solid black") {
            div.style.border = "none";
        } else {
            div.style.border = "1px solid black";
        }
    }, 1000);
});
div {
    width: 100px;
    height: 100px;
    background-color: yellow;
    border-radius: 50%;
}
<p class="clickedelement">when this is <b>clicked</b> i want border added and removed (after 1s) on div below</p>
<div class="my-div-class"></div>
You can do it with CSS alone:
document.querySelector(".clickedelement").addEventListener("click", function (e) {
    // no JavaScript needed; the CSS below handles the border
});
div {
    width: 100px;
    height: 100px;
    background-color: yellow;
    border-radius: 50%;
    border: 2px solid #ff000000; /* fully transparent red (alpha 00) */
    transition: border 2s ease-out;
}
.clickedelement:active ~ div {
    border: 2px solid #ff0000;
    transition: border 100ms ease-out;
}
<p class="clickedelement">when this is clicked i want border added and removed (after 1s) on div below</p>
<div class=""></div>
Not directly related, but for anyone facing a similar issue now:
snowflake.connector.errors.OperationalError: 254007: The certificate is revoked or could not be validated: hostname=xxxxxxxxx
Upgrading snowflake-connector-python to 3.15.0 resolved it for me.
Reference: https://github.com/orgs/community/discussions/157821#discussioncomment-12977833
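If you're unsure what the upgrade looks like, assuming a standard PyPI install:
pip install --upgrade "snowflake-connector-python>=3.15.0"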
Did you ever figure this out? I'm having the same issue.
You should use the same session as your DataFrame:
df.sparkSession.sql("select count(*) from test_table limit 100")
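A minimal sketch of why this matters, assuming df is an existing DataFrame (df.sparkSession is available in recent PySpark versions):
df.createOrReplaceTempView("test_table")  # the view is registered on df's own session
df.sparkSession.sql("select count(*) from test_table").show()  # same session, so the view is visible
A SparkSession created separately would not necessarily see that temp view.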
After removing Docker Desktop, restart your computer and WSL will revert to the Docker CE version.
It seems that I can't comment, so I have to leave another answer.
@rzwitserloot - I like your very thorough answer about why reusing existing "stuff" is confusing and difficult to implement.
I'd like to throw out a suggestion, though. On the simpler annotations that only generate one thing (NoArgs, AllArgs, etc.), don't reuse existing annotations; add a new parameter: @NoArgsConstructor( javadoc=" My meaningful Description - blah, blah, blah, \n new line of more blah \n @Author Me \n @Version( 1.1.1)")
This would generate (literally, exactly the text provided, except the comment characters):
/**
* My meaningful Description - blah, blah, blah,
* new line of more blah
* @Author Me
* @Version( 1.1.1)
*/
Use only minimal interpretation; in my example, only the "\n" for a new line, and maybe add the tab "\t".
Another option would be to only allow actual tabs and newlines inside the quotes; then the only 'processing' would be to add the comment characters.
My justification for this answer is that JavaDoc seems to produce a lot of messages as WARNINGS. It is very easy to miss more obvious problems because they are lost in WARNINGS. I make an effort to clear out warnings so that I don't miss problems.
I understand this may be more difficult than I am making it out, but my goal is to get rid of the warnings so that I don't miss other important messages.
Thanks!
I haven't looked deeply into Lombok code, but this seems like a reasonable solution.
Nice solution. (Not a new answer; I modified the original question.)
%macro create_table();
    PROC SQL;
        CREATE TABLE TEST AS
        SELECT DATE, NAME,
        %DO i = 1 %to 3;
            0 AS A&i.,
        %END;
        1 as B
        FROM SOURCE;
    QUIT;
%mend create_table;
%create_table();
Can this be expanded to allow a %let evaluation within the loop (or something else that will hold a new macro variable)?
I have a large number of columns that in my case look over 13 quarters of data, and within each bucket of 40 or so columns (13 x 40) there are a good number that look back for year-ago data.
Thus I need something like:
%let iPY = %eval(&i. + 4);
but I would love to avoid the +4 calculation for each needed column in a quarter.
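For what it's worth, a %let is legal inside a %DO loop, so the offset can be computed once per iteration rather than per column. A minimal sketch with hypothetical year-ago column names (A&iPY._PY is made up for illustration):
%DO i = 1 %to 3;
    %let iPY = %eval(&i. + 4); /* year-ago index, computed once per pass */
    0 AS A&i.,
    0 AS A&iPY._PY,
%END;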
Upgrading to Python 3.13 solved the issue, so the explanation is probably a version incompatibility between Windows 10 and Python 3.12 in this particular case.
You can check and verify the form validations.
Check the file object and the length of the file: is there a broken image that is not going to upload?
You can create an array of the new values that you want to update, then create the $user object and update the values. I hope this covers all the points.
Thanks
Is there a workaround for DirectQuery? You cannot use unpivot in DirectQuery, from what I have found.
You can create a new Service Account and generate new keys
from dotenv import load_dotenv
import os
load_dotenv()
api_key = os.getenv("BYBIT_API_KEY")
api_secret = os.getenv("BYBIT_API_SECRET")
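python-dotenv will pick these up from a .env file next to the script; such a file would look like this (values are placeholders):
BYBIT_API_KEY=your_key_here
BYBIT_API_SECRET=your_secret_here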
This works for one time series, but how would it work for a multi-object set?
We just went through the process of trying to get premium data access from Google, with no success - it seems that they are extremely strict about which use cases they grant access to. I am building the reputation platform ReviewKite, and we eventually decided to use the BrightLocal API under the hood to scrape reviews from Google. It's expensive, but worth it for ease of use.
I got it. The main reason I was getting the null rows after each row was hidden characters in the CSV file:
Windows ends lines with Carriage Return + Line Feed (\r\n)
Unix/Linux uses just Line Feed (\n)
If a file created on Windows is read on a Unix/Linux system (like Hadoop/Hive), the \r character can:
Show up as "invisible junk" (like ^M or CR)
Break parsers or formatters (like Hive or awk), resulting in:
Extra blank rows
All NULL columns
Malformed data
So that's the reason why I was getting empty null rows after each valid data row.
Solution: I used dos2unix, which converts the files to Linux format, and I got the expected result.
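For reference, with a hypothetical file name:
dos2unix data.csv
Where dos2unix is not available, sed achieves the same conversion:
sed -i 's/\r$//' data.csv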
import 'package:firebase_auth/firebase_auth.dart';
I've been stuck on that for a while now, and I've been carrying the fix you mentioned across different versions of Yocto. I wasn't proud of it, because nobody else did that except me, so the problem was not Yocto and was probably coming from elsewhere. When moving to Scarthgap this fix stopped doing its job, so I had to find the root cause.
I was building my libraries with a Makefile like so:
lib:
	$(CC) -fPIC -c $(LDFLAGS) $(CFLAGS) $(CFILES)
	$(CC) $(LDFLAGS) -shared -o libwhatever.so.1 $(OFILES)
	ln -sf libwhatever.so.1 libwhatever.so
What was missing is that I needed to add:
LDFLAGS += -Wl,-soname,libwhatever.so.1
so that the Yocto QA check is able to actually find the proper name (the SONAME) inside the .so files.
If you want to verify whether that fix is for you, you can check the SONAME like so:
readelf -d tmp/work/<machine>/libwhatever/<PV>/image/usr/lib/libwhatever.so* | grep SONAME
0x000000000000000e (SONAME) Library soname: [libwhatever.so.1]
If you get no output from this command, then you have the same issue I had and the above fix will work for you.
In Flutter iOS, after setting up signing and other things in Xcode, go back to Android Studio, run a clean, and run iOS from Android Studio as the first run.
Check the current directory and change it accordingly, or enter the full path from the location of the main PHP file:
echo '<br>'.__DIR__;
echo getcwd();
chdir(__DIR__);
Here are more images to illustrate the problem. Below is for regex.h:
Below is for "myconio.h":
I have handled a similar use case with a 'Bot' in AppSheet that listens for a 'Success' or 'Error' value returned from the API call and then branches on that value: it either sends an in-app notification to the user and a message to the app administrator that an API call has failed, or does nothing and proceeds with the next step. Example automation setup below. I can post more details if that seems like something that would work in your situation.
I have my controller with the tips you said, but the borders are still not corrected. When I go to initialize my panel to set the background with the command:
yourPane.setBackground(new Background(new BackgroundFill(Color.TRANSPARENT, CornerRadii.EMPTY, Insets.EMPTY)));
I get an error on Insets.EMPTY. I tried to fix it like this, but I still don't see the borders corrected:
void initialize() {
    confirmDialog.setBackground(new Background(
            new BackgroundFill(Color.TRANSPARENT, CornerRadii.EMPTY, new Insets(0))));
}
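If the Insets.EMPTY error is an unresolved symbol, it is often just a missing import. A minimal sketch of the imports that line needs (confirmDialog as in your snippet; Insets.EMPTY lives in javafx.geometry and is equivalent to new Insets(0)):
import javafx.geometry.Insets;
import javafx.scene.layout.Background;
import javafx.scene.layout.BackgroundFill;
import javafx.scene.layout.CornerRadii;
import javafx.scene.paint.Color;

// With the import in place, Insets.EMPTY resolves
confirmDialog.setBackground(new Background(
        new BackgroundFill(Color.TRANSPARENT, CornerRadii.EMPTY, Insets.EMPTY)));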
Ah, I found the answer on Reddit. Doing SUM(Visited)/SUM(Total)*100 seems to have worked.
There is a bug in Sidekiq 8; see https://github.com/sidekiq/sidekiq/issues/6695
I had the same problem; updating TensorFlow to the latest version (2.19) solved everything.
For those with older project setups without Gradle version catalogs:
Define compose_compiler_version = '2.0.0' in the project Gradle file:
buildscript {
    ext {
        compose_compiler_version = '2.0.0'
    }
}
Add the plugin to the project Gradle file:
plugins {
id("org.jetbrains.kotlin.plugin.compose") version "$compose_compiler_version" // this version matches your Kotlin version
}
------------------------
Add the plugin to the module Gradle file:
plugins {
id "org.jetbrains.kotlin.plugin.compose"
}
Update your module Gradle file dependencies:
Replace the old androidx.compose.compiler:compiler with the new org.jetbrains.kotlin:kotlin-compose-compiler-plugin-embeddable:
dependencies {
    implementation "org.jetbrains.kotlin:kotlin-compose-compiler-plugin-embeddable:$compose_compiler_version"
}
If you have composeOptions in your module Gradle file, also update the version:
composeOptions {
    kotlinCompilerExtensionVersion compose_compiler_version
}
This usually means your app isn’t able to connect to the SMTP server. It might seem like a PHPMailer issue, but most of the time it’s a network problem on the server.
If your app is hosted somewhere that has support, I recommend reaching out to them. Ask them to check and make sure that port 465 is open for sending emails using SMTP with SSL.
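You can also test reachability yourself from the server's shell (the hostname here is a placeholder):
openssl s_client -connect smtp.example.com:465
If this hangs or cannot connect, the problem is the network or a blocked port, not PHPMailer.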
So the problem I continuously have with subprocess.run is that it opens a subshell, whereas os.system runs in my current shell. This has bitten me several times. Is there a way in subprocess to execute without actually creating the subshell?
I am very late to the party, but I wonder if this could work: https://rdrr.io/cran/spatialEco/man/sf_dissolve.html
I am not sure whether this dissolving by features can be implemented with sf functions; happy to learn from more experienced people around :)
Be careful using that ID. It's an incremental ID, not a random ID.
Is it OK to use it? Well... if you need a RANDOM id, then don't use it.
I found the problem. I was using the wrong import. I had:
import io.ktor.http.headers
But it should be:
import io.ktor.client.request.headers
Thanks, it worked for me as well.
The problem was solved by running the command in Git Bash rather than PowerShell:
keytool -exportcert -alias YOUR_ALIAS -keystore YOUR_KEYSTORE_PATH | openssl sha1 -binary | openssl base64
YOUR_ALIAS – your keystore alias
YOUR_KEYSTORE_PATH – the path to your .keystore file
There's no easy answer to this issue. The only way to solve it is by implementing a custom domain in the applications and in Azure AD B2C. This issue is also acknowledged by OpenID Connect: https://openid.net/specs/openid-connect-frontchannel-1_0.html#ThirdPartyContent; basically, many browsers block cookie values from other websites.
You can check the Microsoft documentation too: https://learn.microsoft.com/en-us/entra/identity-platform/reference-third-party-cookies-spas
To solve it, you need to use a custom domain. In my case it's something I will need anyway, so it turned out convenient. My Azure AD B2C uses a new subdomain called login.mydomain.com, and my apps are at app1.mydomain.com and app2.mydomain.com. So when the iframe calls app1.mydomain.com/logout, the session is revoked as well, and every logged-in user/cache is cleared.
What ended up working (at least for what I need) is actually using =DATEDIF(A11, TODAY(), "D") and then dividing that number of days by 30.4375. I got the 30.4375 by dividing 365.25 ((365+365+365+366)/4; accounting for centurial years, the Gregorian average is 365.2425) by 12 months.
To use the package after installation, first verify it's installed by running pip list. Then add its path to sys.path to import and use it normally.
If a package is installed but not automatically added to Python's path, you can manually include its directory in sys.path.
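A minimal sketch, with placeholder path and package names:
import sys
sys.path.append("/path/to/site-packages")  # hypothetical install location
import your_package  # hypothetical package; now importable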
What I'm interested in is understanding why I get that error. Is it something I'm doing wrong, I'm missing something or is it a bug in gold?
You aren't missing anything (at least nothing relevant to the linkage failure)
and aren't doing anything wrong. There is a corner-case bug in ld.gold.
Repro
I have your program source in test.cpp. I haven't installed the header-only libraries
spdlog or fmt; I've just cloned the repos for the present purpose.
$ g++ --version | head -n1
g++ (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
$ ld.gold --version | head -n1
GNU gold (GNU Binutils for Ubuntu 2.42) 1.16
$ export CPATH=$HOME/develop/spdlog/include:$HOME/develop/fmt/include
$ g++ -c test.cpp
Link without -gc-sections:
$ g++ test.o -fuse-ld=gold -static; echo Done
Done
And with -gc-sections:
$ g++ test.o -fuse-ld=gold -static -Wl,-gc-sections; echo Done
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x74): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
collect2: error: ld returned 1 exit status
Done
Trying other linkers
Besides gold, -fuse-ld recognises three other ELF linkers. Let's try them all at
that linkage:
ld.bfd (the default GNU linker)
$ ld.bfd --version | head -n1
GNU ld (GNU Binutils for Ubuntu) 2.42
$ g++ test.o -fuse-ld=bfd -static -Wl,-gc-sections
$ ./a.out
Hello, fmt!
[2025-05-06 18:17:10.378] [info] Hello, spdlog!
ld.lld (the LLVM linker)
$ ld.lld --version | head -n1
Ubuntu LLD 18.1.6 (compatible with GNU linkers)
$ g++ test.o -fuse-ld=lld -static -Wl,-gc-sections
$ ./a.out
Hello, fmt!
[2025-05-06 18:18:21.994] [info] Hello, spdlog!
ld.mold (the Modern linker)
$ ld.mold --version | head -n1
mold 2.30.0 (compatible with GNU ld)
$ g++ test.o -fuse-ld=mold -static -Wl,-gc-sections
$ ./a.out
Hello, fmt!
[2025-05-06 18:22:47.597] [info] Hello, spdlog!
So gold is the only one that can't link this program.
What is gold doing wrong?
The first diagnostic:
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x14): \
error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
is reporting that the relocation target at offset 0x14 in section .note.stapsdt of object file
libc.a(pthread_create.o) refers to the local symbol .text, which is symbol #1 in that object
file, and that this relocation can't be carried out because the section in which that symbol is
defined has been discarded.
The second diagnostic is just the same, except that the relocation target this time is at offset 0x74, so we'll just pursue the first diagnostic.
Let's check that it's true.
First get that object file:
$ ar -x $(realpath /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a) pthread_create.o
Check out the relocations for its .note.stapsdt section:
$ readelf -rW pthread_create.o
...[cut]...
Relocation section '.rela.note.stapsdt' at offset 0x46f8 contains 4 entries:
Offset Info Type Symbol's Value Symbol's Name + Addend
0000000000000014 0000000100000001 R_X86_64_64 0000000000000000 .text + 40e
000000000000001c 0000003000000001 R_X86_64_64 0000000000000000 _.stapsdt.base + 0
0000000000000074 0000000100000001 R_X86_64_64 0000000000000000 .text + c7b
000000000000007c 0000003000000001 R_X86_64_64 0000000000000000 _.stapsdt.base + 0
...[cut]...
Yes, it has relocation targets at offsets 0x14 and 0x74. The first one is to be patched using the address
of symbol #1 ( = Info >> 32) in the symbol table (which we're told is .text) + 0x40e. Symbol #1 in pthread_create.o is
$ readelf -sW pthread_create.o | grep ' 1:'
1: 0000000000000000 0 SECTION LOCAL DEFAULT 2 .text
indeed the local symbol .text (a section name), and it is defined in section #2 of the file,
which of course is:
$ readelf -SW pthread_create.o | grep ' 2]'
[ 2] .text PROGBITS 0000000000000000 000050 001750 00 AX 0 0 16
the .text section.
So the diagnostic reports that gold has binned the .text section of pthread_create.o. Let's
ask gold to tell us what sections of pthread_create.o it discarded.
$ g++ test.o -fuse-ld=gold -static -Wl,-gc-sections,-print-gc-sections 2>&1 | grep pthread_create.o
/usr/bin/ld.gold: removing unused section from '.text' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.data' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.bss' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.rodata.str1.1' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.rodata.str1.8' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.text.unlikely' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.rodata.str1.16' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.rodata.cst4' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.rodata' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.rodata.cst8' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x74): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
It discarded 10 of the:
$ readelf -SW pthread_create.o | head -n1
There are 23 section headers, starting at offset 0x48c0:
23 sections in the file, including .text, as compared with:
$ g++ test.o -fuse-ld=bfd -static -Wl,-gc-sections,-print-gc-sections 2>&1 | grep pthread_create.o
/usr/bin/ld.bfd: removing unused section '.group' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.bfd: removing unused section '.stapsdt.base[.stapsdt.base]' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.bfd: removing unused section '.rodata.cst4' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.bfd: removing unused section '.rodata' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
the 4 sections discarded by ld.bfd, excluding .text. gold also retains 2 sections (.group, .stapsdt.base) that bfd discards, but the outcome
says that gold has chucked out a baby with the bathwater.
The linkage error is (all but) a false alarm
The retention of section .note.stapsdt from pthread_create.o sets it off. This
section is retained because any output .note.* section will be a GC-root section for
any linker: .note sections are conventionally reserved for special information to
be consumed by other programs, and as such are unconditionally retained in the same
way as ones defining external symbols. .note.stapsdt sections in particular are emitted to expose
patch points for the runtime insertion of Systemtap
instrumentation hooks.
Presumably, you don't care if this program has Systemtap support. You've just got
it because it's compiled into pthread_create.o (and elsewhere in GLIBC). The
enabling .note.stapsdt section is a GC-root section in pthread_create.o that
references its .text section. But your program has no functional need for that
.text section. We can observe this by just blowing through the linkage failure
with:
$ rm a.out
$ g++ test.o -fuse-ld=gold -static -Wl,-gc-sections,--noinhibit-exec
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x74): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
--noinhibit-exec tells the linker to output a viable image if it can make one, notwithstanding
errors. And in this case:
$ ./a.out
Hello, fmt!
[2025-05-07 10:51:57.987] [info] Hello, spdlog!
The .text section of pthread_create.o is garbage-collected; the linkage errors,
but the program is perfectly fine.
So we'd expect a clean linkage if we yank .note.stapsdt out of pthread_create.o
and interpose the modified object file in the link, and so we do:
$ objcopy --remove-section='.note.stapsdt' pthread_create.o pthread_create_nostap.o
$ g++ test.o pthread_create_nostap.o -fuse-ld=gold -static -Wl,-gc-sections
$ ./a.out
Hello, fmt!
[2025-05-07 11:08:27.647] [info] Hello, spdlog!
The program is fine without the .note.stapsdt and/or the .text section of
pthread_create.o, but Systemtap would not be fine with the program. That's
the cash value of the linkage failure.
The linkage error has nothing to do with your particular program.
Check out this deranged linkage:
$ cat main.c
int main(void)
{
return 0;
}
$ gcc main.c -static -Wl,-gc-sections,--whole-archive,-lc,--no-whole-archive
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(dso_handle.o):(.data.rel.ro.local+0x0): multiple definition of `__dso_handle'; /usr/lib/gcc/x86_64-linux-gnu/13/crtbeginT.o:(.data+0x0): first defined here
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(rcmd.o): in function `__validuser2_sa':
(.text+0x5e8): warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(rcmd.o): note: the message above does not take linker garbage collection into account
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(dl-reloc-static-pie.o): in function `_dl_relocate_static_pie':
(.text+0x0): multiple definition of `_dl_relocate_static_pie'; /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/crt1.o:(.text+0x30): first defined here
collect2: error: ld returned 1 exit status
where I'm trying and failing to make a garbage-collected static linkage of the whole of GLIBC into a do-nothing program, with the default linker.
Now let's repeat the failure with gold:
$ gcc main.c -fuse-ld=gold -static -Wl,-gc-sections,--whole-archive,-lc,--no-whole-archive
/usr/bin/ld.gold: error: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(dso_handle.o): multiple definition of '__dso_handle'
/usr/bin/ld.gold: /usr/lib/gcc/x86_64-linux-gnu/13/crtbeginT.o: previous definition here
/usr/bin/ld.gold: error: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(dl-reloc-static-pie.o): multiple definition of '_dl_relocate_static_pie'
/usr/bin/ld.gold: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/crt1.o: previous definition here
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_cond_destroy.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_cond_init.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x74): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_join_common.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_join_common.o)(.note.stapsdt+0x5c): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_mutex_destroy.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_mutex_init.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_mutex_timedlock.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_mutex_timedlock.o)(.note.stapsdt+0x68): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_mutex_timedlock.o)(.note.stapsdt+0xbc): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_mutex_timedlock.o)(.note.stapsdt+0x11c): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_rwlock_destroy.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(____longjmp_chk.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(____longjmp_chk.o)(.note.stapsdt+0x64): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
collect2: error: ld returned 1 exit status
Now we're sprayed with:
libc.a(???.o)(.note.stapsdt+???): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
errors that weren't there before, the great majority of the ???.o being pthread_???.o.
How does gold come to disregard the .note.stapsdt references into .text
in pthread_create.o?
To understand that I had to get the binutils-gdb source code,
study the gold source and debug a build of it on the problem linkage with ad-hoc diagnostics added. Here is the gist.
gold's GC algorithm initially reserves a set of GC-root sections in the pre-GC linkage to be retained
unconditionally. These include the section that contains the _start symbol (or
other non-default program entry symbol), plus all sections that match a hard-coded
set of prefixes or names, including all .note.* sections. So pthread_create.o(.note.stapsdt) is one of them.
For each section src_object.o(.src_sec) of each object file linked - provided it is
a type-ALLOC section - GC maps that section to list of the relocations ( = references) from src_object.o(.src_sec)
into any other input section dest_object.o(.dest_sec), so that if src_object.o(.src_sec) is
retained then dest_object.o(.dest_sec) will also be retained. An ALLOC section here
means one that will occupy space in the process image, as indicated by
flag SHF_ALLOC being set in the section header. This property can be taken to mean that the section
would be worth garbage collecting. The algorithm
discovers the relocations by reading the corresponding relocations section src_object(.rel[a].src_sec).
Then, starting with the GC-root sections, the algorithm recursively determines for each retained section what other sections it refers to, as per its associated relocations, and adds the sections referred to to the retained list. Finally, all sections not retained are discarded.
This is all as should be, except for the winnowing out of sections that are
not type ALLOC from relocations gathering. That is a flaw, because a .note.* section, depending
on its kind, might be type ALLOC (e.g. .note.gnu.property, .note.ABI-tag in this linkage) or it might not
(e.g. .note.gnu.gold-version, .note.stapsdt in this linkage), and being non-ALLOC does not
preclude it having relocations into ALLOC sections. The bug will sleep soundly
as long as a non-ALLOC .note.* section that is winnowed out of
GC relocations processing does not contain relocations.
Section pthread_create.o(.note.stapsdt) is non-ALLOC:
$ readelf -SW pthread_create.o | egrep '(.note.stapsdt|Section|Flg)'
Section Headers:
[Nr] Name Type Address Off Size ES Flg Lk Inf Al
[ 8] .note.stapsdt NOTE 0000000000000000 001928 0000c8 00 0 0 4
[ 9] .rela.note.stapsdt RELA 0000000000000000 0046f8 000060 18 I 20 8 8
(Flg A not set), but it does have relocations. So the
bug bites. The GC algorithm never sees the associated
relocations in .rela.note.stapsdt that refer to pthread_create.o(.text). When it finds that
pthread_create.o(.note.stapsdt) is non-ALLOC, it just skips over pthread_create.o(.rela.note.stapsdt)
without further ado.
Thus GC never records that pthread_create.o(.note.stapsdt) - retained -
refers to pthread_create.o(.text), and since nothing else refers to pthread_create.o(.text),
it is discarded. When the time comes to apply relocations to pthread_create.o(.note.stapsdt),
the section they refer to is no longer in the linkage.
A comment in file binutils-gdb/gold/reloc.cc explaining the flawed
winnowing test:
// We are scanning relocations in order to fill out the GOT and
// PLT sections. Relocations for sections which are not
// allocated (typically debugging sections) should not add new
// GOT and PLT entries. So we skip them unless this is a
// relocatable link or we need to emit relocations. FIXME: What
// should we do if a linker script maps a section with SHF_ALLOC
// clear to a section with SHF_ALLOC set?
illuminates how .note.stapsdt sections fall through the cracks. It is
unclear to me why this a priori logic should be allowed to prevail over
contrary evidence that a non-ALLOC .somesec section does have relocations as
provided by the existence of a .rel[a].somesec section. If such
non-ALLOC sections were acknowledged they would need to be
deferred for special "inverted" GC-handling: Instead of taking their retention
to entail the retention of any sections that they transitively refer to,
GC would need to determine what other sections are to be discarded without reference to the non-ALLOC ones
and then also discard all the non-ALLOC ones that refer only to already discarded sections.
The open FIXME is pointed in our context because it foresees the a priori logic
coming unstuck, but not in quite the way that we observe.
Is there a gold workaround?
That code comment kindles hope that we might dodge the bug if we were either to:
do a relocatable (-r|--relocatable), static, garbage-collected preliminary linkage of test.o, then statically link the resulting object file into a program, requesting -nostartfiles to avoid linking the startup code twice; or
request -q|--emit-relocs, even though we don't want to emit relocations.
But gold will not play with either of these desperate ruses. The first one:
$ g++ -o bigobj.o test.o -fuse-ld=gold -static -Wl,-r,--entry=_start,-gc-sections
/usr/bin/ld.gold: error: cannot mix -r with --gc-sections or --icf
/usr/bin/ld.gold: internal error in do_layout, at ../../gold/object.cc:1939
collect2: error: ld returned 1 exit status
And the second one:
$ g++ test.o -fuse-ld=gold -static -Wl,-q,-gc-sections
/usr/bin/ld.gold: internal error in do_layout, at ../../gold/object.cc:1939
collect2: error: ld returned 1 exit status
Both of them work with ld.bfd, where they're not needed (they also work with mold, and both fail with lld). AFAICS the only remedies that work for gold are the ones we've already seen: either link with --noinhibit-exec, or else use objcopy to make sanitised copies of the problem object files from which the redundant .note.stapsdt sections are deleted. At a stretch these might be called workarounds, but hardly gold workarounds. Obviously a reasonable person would give up on gold and use one of the other linkers that just work (as indeed you are resigned to do).
Reporting the bug will likely be thankless because gold is moribund, as @Eljay commented.
Something you are maybe missing (though not relevant to the linkage failure):
The linkage option -gc-sections is routinely used in conjunction with the compiler options -ffunction-sections and -fdata-sections. These respectively direct the compiler to emit each function definition or data object definition in a section by itself, which empowers GC to work unhandicapped by unreferenced definitions that it cannot discard because they are located in sections that also contain referenced definitions.
In object code where template instantiations are altogether absent or not prevalent, omitting -ffunction-sections and -fdata-sections at compilation will normally render the pay-off of -gc-sections considerably sub-optimal. If template instantiations are prevalent, the handicap is mitigated pro rata to their prevalence by the fact that the C++ compiler, for technical reasons, places template instantiations in their own sections anyway. The handicap is further mitigated by optimisation level, so for a C++ program made almost entirely of template instantiations, such as yours, with -O3 optimisation, -ffunction-sections and -fdata-sections may have little to no benefit on GC. But as a rule they will produce a GC benefit, and the only effect they can have is for the better.
Updated version:
String.format("%05d", num)
I don't know your file tree, so I can't know whether my answer is correct or not.
Instead of '/assets/images/header/background.webp',
try using './assets/images/header/background.webp'.
I think the site may be redirecting you, which generates the error. If you use the -L ("follow redirects") flag it should work.
#!/bin/bash
echo "Downloading Visual Studio Code..."
# Quote the URL: an unquoted & would background the command
curl -L -o VScode.zip "https://code.visualstudio.com/sha/download?build=stable&os=darwin-universal"
To fix this problem, set an environment variable PYTHON_BASIC_REPL in Windows PowerShell to any value, for example:
$env:PYTHON_BASIC_REPL = 'True'
and then call python.exe
Then you can reach all characters entered with AltGr.
I had a similar problem and was wondering about this. A bugfix should also be included in the new 3.13.4 release. This is for all those who encountered the problem and happened to come across this page via Google.
Kind regards
It may be caused by wsgi.py and server settings; make sure you added the app name to INSTALLED_APPS in settings.py.
My preference for Django projects is render.com. You can try it for free.
EDIT: I figured out the issue. The problem is that new_line really needs to point to a new line (a green line in the PR view). If it's not green, I have to supply both new_line and old_line. If it's a red line, I have to supply old_line.
Thank you!
I tried using a token instead of user/pass, and it's been working longer, but I need to keep an eye on it. However, this does not explain why my other services are working; only this one is disconnecting and is not showing any errors.
First, I created a token with
influx auth create \
--org <ORG_NAME> \
--read-buckets \
--write-buckets \
--description "Token for My App"
and then
InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
nvm use 16.13
Make sure to use it in every terminal related to running the project.
Delete node_modules.
Restart Metro and run again:
yarn start --reset-cache
yarn android
It took me a while to realize this, but in my case I actually had to head to Output and then select "Python Test Log":
I'm used to building data viz in Redash or Grafana, which both have the workflow "New Dash" --> "New Chart" --> [write some SQL, choose visualization options for the output] --> done. For a new work project, I have to build a dash in Looker Studio instead.
I had the same case. Make sure the timestamp you're passing is the current timestamp; it can't be the same as on the previous notification, or it will be discarded by the system as "outdated".
From today it started again. Has anyone found a permanent fix for this issue? Please help me out.
<div class="youtube-subscribe">
<img src="https://yt3.ggpht.com/ytc/AAUvwniG-oe9jIj-TP4N1ez8QRHlvLgCxjLPg8tNcw=s88-c-k-c0x00ffffff-no-rj" alt="Channel Logo" class="channel-logo">
<div class="g-ytsubscribe" data-channelid="UC5m6LJBqCl6VF9nZPUr7cuA" data-layout="default" data-count="default"></div>
</div>
<script src="https://apis.google.com/js/platform.js"></script>
I managed to get it working without using workerSrc. I only needed the following import:
import 'pdfjs-dist/build/pdf.worker.mjs'
This is also mentioned in a comment on the react-pdf GitHub issue.
Please note that there are two "RST" buttons. One on the motherboard and one on the cam board. When using the IO0 long press and short RST press methods above, be sure to use the RST button on the cam board. The RST button on the motherboard does NOT work.
In my case -f was already in the code (source="$(readlink -f "${source}")"), so I went to the target's Build Settings, searched for 'User Script Sandboxing', and set it to No.
This is a working solution, based on @Joe B's example:
<script setup lang="ts">
import { Form, Field, ErrorMessage } from 'vee-validate';
import * as yup from 'yup';
const schema = yup.object({
accept: yup.bool(),
});
function onSubmit(values) {
console.log(JSON.stringify(values, null, 2));
}
</script>
<template>
<div id="app">
<Form :validation-schema="schema" @submit="onSubmit" v-slot="{ values }">
<Field
name="accept"
type="checkbox"
:value="true"
:unchecked-value="false"
/>
<ErrorMessage name="accept" />
<button>Submit</button>
<p>Values</p>
<pre>{{ values }}</pre>
</Form>
</div>
</template>
An alternative approach would be:
if ! command -v scp >/dev/null 2>&1; then
echo "scp could not be found."
exit 1
fi
MSFT,
I have the IE 11 browser, but when I run your code I get an error on method Document of object 'IWebBrowser2'. Could you help me, please? Thank you!
I feel like Indiana Jones, excavating this quite old question. ⛏️
But for those stumbling here, after looking for their symptoms online, please rejoice, because it's now fixed.
I don't know if it was fixed by a release of Safari, or Shaka Player, or both, but it has been fixed at some point.
You're welcome.
You should just be able to split up the negative lookbehind into multiple lookbehinds, like so:
(?<!AA)(?<!BAB)(?<!CAC)(?<!JHGS)[1-3]\s*(?=[A-Z])
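A quick check in Python, with made-up sample text:
import re
pattern = r"(?<!AA)(?<!BAB)(?<!CAC)(?<!JHGS)[1-3]\s*(?=[A-Z])"
print(re.findall(pattern, "AA1B CC2D"))  # ['2']: the '1' is blocked by the AA lookbehind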
Just run the command ssh -V; this will confirm the OpenSSH suite is installed on your machine: https://www.openssh.com
You should redirect the user to Tournament.cshtml from the Register.cshtml. Also, make sure to check whether the user has already created a tournament upon their first login. This ensures that the tournament creation step is not bypassed, especially since you've mentioned that every user must create a tournament.
For SQL Server 2005+
SELECT SERVERPROPERTY('productversion') productversion,
SERVERPROPERTY ('productlevel') productlevel,
SERVERPROPERTY ('edition') edition
In your /ios folder, run these commands:
pod init
pod install
OK, got it to work.
There seems to be a problem with the location of your php.conf file. My directory structure is:
root@hi /home/test/stackoverflow_79617562 $ ls -laR
.:
total 20
drwxr-xr-x 4 root root 4096 May 12 16:25 ./
drwxr-x--- 9 jjakes jjakes 4096 May 12 16:01 ../
-rw-r--r-- 1 root root 342 May 12 16:23 docker-compose.yml
drwxr-xr-x 3 root root 4096 May 12 16:29 nginx/
drwxr-xr-x 2 root root 4096 May 12 16:29 src/
./nginx:
total 12
drwxr-xr-x 3 root root 4096 May 12 16:29 ./
drwxr-xr-x 4 root root 4096 May 12 16:25 ../
drwxr-xr-x 2 root root 4096 May 12 16:29 conf.d/
./nginx/conf.d:
total 12
drwxr-xr-x 2 root root 4096 May 12 16:29 ./
drwxr-xr-x 3 root root 4096 May 12 16:29 ../
-rw-r--r-- 1 root root 491 May 12 16:23 php.conf
./src:
total 12
drwxr-xr-x 2 root root 4096 May 12 16:29 ./
drwxr-xr-x 4 root root 4096 May 12 16:25 ../
-rwxrwxrwx 1 root root 20 May 12 16:29 index.php*
So try moving your PHP config to ./nginx/conf.d/php.conf.
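For completeness, a minimal sketch of docker-compose volume mappings matching this layout (the container paths are assumptions based on a common nginx + PHP-FPM setup):
services:
  nginx:
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./src:/var/www/html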
I found a solution for it: I saved my code files, deleted the project's repository, reinstalled everything from zero (even Node.js), then put my code files back in their right places, and it worked perfectly.
This is possible using the more recent :nth-child( of ) syntax:
.common-class:nth-child(1 of :not(.ignore))
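For example, to style the first child that does not have .ignore (the of-syntax is supported in all current major browsers, though only relatively recently):
.common-class:nth-child(1 of :not(.ignore)) {
    font-weight: bold;
}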
In my case, updating the version to flutter_web_auth_2: ^5.0.0-alpha.2 solved the problem.
https://github.com/ThexXTURBOXx/flutter_web_auth_2/issues/157
So, an answer that most people miss, especially on older CentOS platforms: this can be as simple as using the wrong username in your `sudo -u <username>` command. If that user does not exist, you get this error, which is mighty confusing, and no reinstall or change of configuration will fix it!
I think this issue might also be caused by the chat template used with the Llama model. For example, the LLaMA 3.2 template includes the instruction:
"Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt."
Leaving it unchanged can effectively force the model to call tools, even when unnecessary.
Note that the template of the llama3-groq-tool-use model @LaurieYoung mentioned is less "aggressive":
You may call one or more functions to assist with the user query.
As others have already pointed out, it's preferable to use element IDs or classes rather than XPath, since XPath can easily break with even minor page modifications (e.g., temporary messages indicating downtime - which happens quite often!).
Additionally, be cautious with browser navigation actions like "back" or "forward." These often lead to unexpected behavior and can disrupt the session on the Handelsregister website, causing XPath selectors to fail because you're no longer on the intended page.
Ultimately, we found it easier to avoid scraping Handelsregister directly and instead switched to structured data providers like handelsregister.ai
Good luck!
Email clients (like Gmail, Outlook, Apple Mail) do not support custom URI schemes like the one for Spotify (for security reasons, I guess); once inside the web page, there should be a prompt to open the link in the app.
I needed to extract a part from a name where the sections were separated by dashes; in fact, the name always contained five dashes.
RegEx will surely work, but I agree it makes things look more complicated than they need to be.
$SignificantPartOfName = $($NameContainingFiveDashes.Split('-')[4]).trim()
Sincerely AndreasM
Got the issue: I wasn't using the right SHA-1 hash.
To get the right one, I had to go into the android folder and run:
./gradlew signingReport
and take the first one.
Correct your PrismaClient import to:
import { PrismaClient } from "@/generated/prisma";
To suppress the password prompt, please add this to the jupyter command line:
--ServerApp.password=''
My issue was resolved by doing the following:
Open IIS.
Go to the current server -> Application Pools.
Select the application pool your 32-bit application will run under.
Click Advanced Settings (or Application Pool Defaults).
Set Enable 32-bit Applications to True.
Reset IIS.
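The same change can be scripted with appcmd (the pool name here is a placeholder):
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /enable32BitAppOnWin64:true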
If you use authentication, just check your middleware; you probably protected some pages. You should make the path '/api/uploadthing' public so uploadthing can use it.
The prompt you define in LangChain is just the content.
However, the chat template defines the structure - how that content is wrapped and presented to the model - and can strongly influence behavior.
This issue is likely caused by the chat template used with your model. Some templates are designed to encourage tool use. For example, the LLaMA 3.2 template includes the instruction:
"Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt."
This can effectively force the model to call tools, even when unnecessary.
To fix this, adjust your chat template to clearly state that tools should only be used when necessary, and that direct answers should be preferred when possible.
You can toggle the "Show query results in new tabs" setting under Tools -> Preferences... -> Database -> Worksheet. You can use worksheets to display query results; toggle this property on and see if you achieve the result you want.
Found a solution that closes my question:
@Stable
val Arrangement.LastItemBottomArrangement: Arrangement.Vertical
get() = object : Arrangement.Vertical {
override fun Density.arrange(
totalSize: Int,
sizes: IntArray,
outPositions: IntArray,
) {
var currentOffset = 0
sizes.forEachIndexed { index, size ->
if (index == sizes.lastIndex) {
outPositions[index] = totalSize - size
} else {
outPositions[index] = currentOffset
currentOffset += size
}
}
}
}
LazyColumn(
modifier = modifier
.fillMaxSize(),
verticalArrangement = Arrangement.LastItemBottomArrangement,
)
Did you find any solution? I am dealing with the same issue
I had to change the origin paths under Options > Source Control >Git Repository Settings > Remotes and then the push succeeded.
To my knowledge, you don't need to configure PrometheusMeterRegistry if you are using Spring Boot 3.x. You just need the metrics dependencies, which you already have.
Make sure that you have enabled exemplars in Prometheus: https://prometheus.io/docs/prometheus/latest/feature_flags/#exemplars-storage
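Assuming you start Prometheus directly, enabling the feature flag looks like:
prometheus --enable-feature=exemplar-storage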
And in the PromQL query in Grafana you will have a toggle to enable exemplars for the specific query.
Finally, make sure that you have configured the connections in Grafana: Home -> Connections -> Data Sources -> Prometheus/Tempo/Loki. (If you cannot edit connections, you need to make them editable in the Grafana deploy configs.)
It doesn't seem to be considered secret. Presumably the API key is tied to the registered domain, and the license key for self-hosting is just to ensure GPL compliance.
https://www.tiny.cloud/tinymce/security/
Note: Tiny does not consider an API Key to be private or sensitive data.
https://www.tiny.cloud/docs/tinymce/latest/license-key/
What is the difference between a license key and the API key?
The API key is used when loading TinyMCE from the Tiny Cloud. The license key is used to declare the license terms when self-hosting TinyMCE.
Should I be using both an API key and a license key?
No, an API key and a license key should not be used simultaneously. The API key should only be used if TinyMCE is loaded from the Tiny Cloud. If TinyMCE is being self-hosted, the license key option should be used instead.
Will TinyMCE “phone home” to check the license key?
No. TinyMCE does not contact any server to validate the license key.
What happens if I don’t provide a valid license key?
The console log message or notification will persist until a valid license key or 'gpl' is provided.
Why is a license key required?
The license key ensures compliance with TinyMCE licensing terms. It’s part of our efforts to refine the license key system and may have additional functionalities in the future.
To solve captchas, or if you want to explicitly bypass them, you can use https://netnut.io/?utm_medium=organic&utm_source=google
They have a product called Unblocker; I've tested it and it works wonders.
This bug was resolved in Aspose.Cells NuGet version 25.5.0. Thanks to the Aspose team for the quick turnaround!
You need to remove the old plist reference from the target and then set the new target for Info.plist.
Targets will show in Build Settings.
I have the same problem. I have to work on code my school sent me as an assignment, where they asked me to work with Java 1.8, but the Gradle version is 4.10.3. If I understood correctly, I should either work with a 2.x version of Gradle or update Java to version 9. Is that correct? Unfortunately I can't find the 2.0 version and I don't know how to solve this problem.
distributionUrl=https\://services.gradle.org/distributions/gradle-4.10.3-all.zip
Have you found any solution to address your issue? We are facing the same.
Thanks, best regards
I had a similar question a long time ago and created this gem: https://github.com/codesnik/calculate-all/
With it you can do:
Model.group(:status).calculate_all(:max_id, :min_id)
It is just a convenient wrapper around pluck, though.
I have a beautiful trick: type pip --version and it shows where the current pip is, revealing the path of the virtual env.
The problem is solved. I had to fix the RampartSender class from the rampart-core_1.6.1.wso2v40 library.
I know this post is from two years ago, but since I'm currently maintaining Qt translations and we're working on the idea of such a migration tool, I'm interested in learning more about your approach.
Could you elaborate on why you wanted to perform this migration in the first place, and how you went about it?
I also noticed you're using //= meta strings; could you share why you are using these?
Today I'd like to share a small experience report with the developer community: a bug I ran into while running migrations in a Symfony project 🎯.
When running the command:
php bin/console doctrine:migrations:status
I got the following error:
❌ DoctrineMigrationsBundle requires DoctrineBundle to be enabled.
After some research, I understood that the core DoctrineBundle was not registered in my bundles.php file. Result: the migration system was unusable.
✅ Here are the steps I followed to fix the problem:
✔️ I checked that the config/bundles.php file contained:
Doctrine\Bundle\DoctrineBundle\DoctrineBundle::class => ['all' => true],
🧩 I installed the missing bundle (just in case) with:
composer require doctrine/doctrine-bundle
🔄 Then another error appeared:
The metadata storage is not up to date...
I simply ran:
php bin/console doctrine:migrations:sync-metadata-storage
✅ And everything was back in order! My migrations are now properly recognized and runnable.
💡 This kind of issue can easily happen when starting out with Symfony + Doctrine.
But every bug is an opportunity to learn and to better understand the framework's internals.
👨💻 If this helps someone who is just starting out or hitting the same problem, I'm glad I shared it!
I fetch the branch associated with the pull request and check it out with the following commands:
git fetch origin pr-branch-name
git checkout pr-branch-name
Then, in IntelliJ, I open the Git Log GUI, right-click the merge base, and select Compare with Local, which allows me to explore the changes in IntelliJ. However, I always keep Bitbucket open in parallel if I want to comment on specific changes.
For me, this approach offers a satisfactory balance between using IntelliJ's rich features and keeping the PR review process simple.
Thanks for the insights, @Vivek Vaibhav Shandilya.
The issue was resolved after realizing that environment variables defined in local.settings.json are not picked up inside a Docker container. This file is only used by the Azure Functions Core Tools (func start) during local development, not in the Docker runtime.
All required environment variables were added directly to the Dockerfile using ENV, including those below.
ENV AzureWebJobsStorage="your-connection-string" \
    BlobStorageConnectionString="your-connection-string" \
    FUNCTIONS_WORKER_RUNTIME=python \
    AzureWebJobsFeatureFlags=EnableWorkerIndexing
Alternatively, you can pass them via the docker run command.
docker run -p 7071:80 \
    -e AzureWebJobsStorage="your-connection-string" \
    -e BlobStorageConnectionString="your-connection-string" \
    -e FUNCTIONS_WORKER_RUNTIME=python \
    -e AzureWebJobsFeatureFlags=EnableWorkerIndexing \
    my-func
Please tell me if this is what you want. If it is, I will edit the answer to include a description and docs sources.
.parent {
display: flex;
}
.child1 {
background: #ddd;
padding: 1rem;
}
.child2 {
background: #eee;
padding: 1rem;
display: flex;
flex-direction: column;
}
.timeslots {
flex-basis: 0;
flex-grow: 1;
overflow-y: auto;
}
<div class="parent">
<div class="child1">
Tall content here (sets height)<br>
Tall content here (sets height)<br>
Tall content here (sets height)<br>
</div>
<div class="child2">
<div class="timeslots">
Lots of scrollable content here...<br>
Lots of scrollable content here...<br>
Lots of scrollable content here...<br>
Lots of scrollable content here...<br>
Lots of scrollable content here...<br>
Lots of scrollable content here...<br>
Lots of scrollable content here...<br>
Lots of scrollable content here...<br>
</div>
</div>
</div>