The autoplot() method for class 'tbl_ts' (not 'fbl_ts') allows variable selection, so just cast the fable into a tsibble before calling autoplot():
cafe_fc |> lift_fc(lift = 2) |> as_tsibble() |> autoplot(.vars = .mean)
Answering my own question: AFAIK there is no 'proper' EL9 repo hosting libc++ packages.
There is, however, a way to build the RPMs so they can be self-hosted. I believe this [1] GitHub repo has essentially taken the RPM sources from upstream (Fedora) and made them buildable for EL9. There are also binary packages for x86_64 in the GitHub release section, but it's probably unwise to trust those; just build the RPMs yourself.
I'd be happy to retract this answer if a 'proper' EL9 repo appears that avoids the self-build-and-host option. I'd also be interested if anyone knows why there is no official EL9 libcxx package.
Here is a quick solution when using Expo version 53.
Excuse me, were you able to use the Microsoft Exchange email in JavaMail?
Provided you don't need to worry about keeping track of calculated nulls, you can make use of nullish coalescing assignment (??=):
function memoize(calculate) {
    const cache = {};
    return function (key) {
        // compute the value once per key, then reuse the cached result
        cache[key] ??= calculate(key);
        return cache[key];
    };
}
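To see the caching in action, here is a self-contained sketch of the same pattern (square and the sample values are made up for the demo):

```javascript
// Nullish coalescing assignment stores the result the first time a key is seen
function memoize(calculate) {
  const cache = {};
  return function (key) {
    cache[key] ??= calculate(key);
    return cache[key];
  };
}

let calls = 0;
const square = memoize(function (n) { calls += 1; return n * n; });

console.log(square(4)); // 16
console.log(square(4)); // 16, served from the cache
console.log(calls);     // 1
```

Note the caveat from above: if calculate returns null or undefined, ??= will recompute on every call, which is why this only works when you don't need to track calculated nulls.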
It seems that the NumPy maintainers decided it was best to not deprecate these conversions. It was:
Complained about in this issue: https://github.com/numpy/numpy/issues/23904
Resolved in this PR: https://github.com/numpy/numpy/pull/24193
And integrated into NumPy 2.0.0: https://numpy.org/doc/stable/release/2.0.0-notes.html#remove-datetime64-deprecation-warning-when-constructing-with-timezone
However, it hasn't hit v2.2's documentation: https://numpy.org/doc/2.2/reference/arrays.datetime.html#basic-datetimes
Mind you, a warning is still raised, just a UserWarning that datetime64 keeps no timezone information. So, to answer the question:
OK, so how do I avoid the warning? (Without giving up a significant performance factor)
import warnings
import numpy as np

with warnings.catch_warnings():
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    warnings.filterwarnings("ignore", category=UserWarning)
    t = np.datetime64('2022-05-01T00:00:00-07:00')  # np.datetime64 has no tz info
If anyone is still having trouble with this issue: I found that setting the UpdateSourceTrigger to LostFocus instead of PropertyChanged works:
Text="{Binding NewAccountBalance,
UpdateSourceTrigger=LostFocus,
StringFormat=C}"
The problem is that you have to add the "use client" directive to Next.js components that use React Hooks.
I think I might see what’s going on here. You're getting a StaleElementReferenceException, right? That usually happens when the element you’re trying to interact with is no longer attached to the page — maybe because the page has refreshed or the DOM has changed after switching the radio button.
After selecting the "Rooms Wanted" option and submitting the search, are you sure the search_box element is still the same? Could it be that the page reloads or rerenders that part of the DOM when the radio button is changed?
You should try to re-find the search_box element after switching to the second radio button like this:
rent_button = driver.find_element(By.ID, "flatshare_type-offered")
driver.execute_script("arguments[0].checked = true;", rent_button)
search_box = driver.find_element(By.ID, "search_by_location_field")
search_box.send_keys(postcode, Keys.ENTER)
In my case, I had to explicitly specify @latest to resolve the issue:
npm install --save-dev @types/node@latest
The old methods may not work anymore. This is what is working for me to toggle copilot completions:
Add this snippet to your keybindings.json (Ctrl + Shift + P >>> Preferences: Open Keyboard Shortcuts)
{
    "key": "ctrl+shift+alt+o",
    "command": "github.copilot.completions.toggle"
}
OK, so the problem was that the redirect URL for GitHub must be http://localhost:8080/login/oauth2/code/github by default. After changing it, I can reach /secured (but it doesn't redirect me there right after login; I need to do it manually).
"The Distance Within Closeness"
For Nauman, Pakeeza was not just a friend; she was the part of his life without which everything felt incomplete. Pakeeza laughed at his words every time, looked after him, stood by him in every sorrow; but whenever the talk turned to love, she fell silent.
Nauman often wished she would speak her heart openly, yet he never pressured Pakeeza.
He knew that love grows from feeling, not from pressure.
There was something in Pakeeza's heart too, but she was afraid...
perhaps of trusting someone completely,
perhaps of the fear of being broken,
or perhaps because Nauman was so special that she did not want to lose him.
One evening, soaked in the rain, the two sat on a bench with cups of coffee in their hands.
Nauman said softly:
"Pakeeza, I want to accept you completely... just as you are. Not changed, not hidden."
Pakeeza lowered her eyes. Her heart began to race.
"Even when I am far from you, I feel close to you, Pakeeza.
Just once, say just once that you want this too..."
Pakeeza remained silent. But a shimmer of moisture glowed in her eyes, which was perhaps a 'yes', though the words were too afraid.
She said:
"Nauman... I am very happy with you, and I trust you, but... love frightens me.
What if I am broken someday?
What if you change someday?"
Nauman smiled and took her hands:
"If you are broken, I will hold you together.
If anything ever changes, it will only be time... not me."
Pakeeza closed her eyes, as if time had stopped.
And she knew: however great the distance, their hearts had never been far apart.
The ending:
Perhaps the "yes" did not come today, but sometimes loves exist not to be completed, only to be true.
You can update the package name and keystore in EAS credentials. If you do this and the app is set up correctly, you should be able to update the app on the store.
You don't need to delete home/USER/.local/solana/install or anything like that; just delete home/USER/.cache/solana and you can build or test the Anchor program again.
This case occurs because there is a download/extract/build error during the anchor build/test process.
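The cleanup described above can be sketched as follows (the paths are the usual defaults and may differ on your machine):

```shell
# Delete only Solana's download cache; it is re-fetched on the next build
rm -rf "$HOME/.cache/solana"
# Keep the install itself (e.g. ~/.local/solana) intact
# Then re-run: anchor build   (or: anchor test)
echo "solana cache cleared"
```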
The project runs fine; it's only a TypeScript error.
I renamed the file Env.ts to Env.d.ts
and the error went away.
The guide can be found here:
https://github.com/ScottTunstall/BuildMameWithVS2022/blob/main/README.md
Constructive feedback welcome.
Old thread but this other similar question points to a good solution for your problem.
https://superuser.com/questions/1291939/shortcut-to-change-keyboard-layout-in-linux-mint-cinnamon
Best,
Having the exact same issue :/ following for an answer.
This should return the results you expect:
df = (spark.read.option('header', True)
.option('multiline', True)
.option('mode', 'PERMISSIVE')
.option('quote', '"')
.option('escape', '"')
.csv('yourpath/CSVFilename.csv'))
display(df)
Okay, I ended up just getting the actual URLs by doing this in the Chrome Dev tools
const tds = document.querySelectorAll('table tbody tr td span.DPvwYc.QiuYjb.ERSyJd');
const urls = Array.from(tds, td => td.getAttribute('data-value'));
copy(urls.join('\n'));
but yeah, it seems a bit odd to have an export option that doesn't really give you what you need, so you have to create your own way of making an export 🤷
The error was not feature-scaling the target function / input data set; the algorithm itself was working fine. It also helped to choose a different function to model than the logistic, since its value can differ greatly for small input changes, which made it initially harder.
My API PHP code:
<?php
ini_set('display_errors', 1);
error_reporting(E_ALL);

$allowed_origins = [
    "https://fbc.mysite.ir",
    "https://mysite.ir",
    "http://localhost:54992",
    "http://localhost:55687",
    "http://localhost:5173",
    "http://127.0.0.1:5173",
];

$origin = $_SERVER['HTTP_ORIGIN'] ?? '';
if (in_array($origin, $allowed_origins)) {
    header("Access-Control-Allow-Origin: $origin");
    header("Access-Control-Allow-Credentials: true");
}
header("Access-Control-Allow-Methods: GET, POST, OPTIONS, PUT, DELETE");
header("Access-Control-Allow-Headers: Content-Type, Authorization");

if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
    http_response_code(200);
    exit();
}
This was resolved in the 3.2.4 release https://www.psycopg.org/psycopg3/docs/news.html#psycopg-3-2-4
So, dragging your JPG, PNG, or GIF to the folder of the blue file you are working on does work. You then need to copy the path and paste it.
Performance Max ads use AssetGroups, not AdGroups.
There's a new feature in ingress-nginx 1.12 that allows you to filter annotations by risk using annotations-risk-level. Use annotations-risk-level: Critical to allow allow-snippet-annotations: true.
For further reference you can check this blog and discussion.
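As a sketch, the two related settings could sit together in the controller's ConfigMap; the ConfigMap name and namespace below are assumptions based on the chart defaults:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed default name
  namespace: ingress-nginx
data:
  allow-snippet-annotations: "true"
  annotations-risk-level: Critical
```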
painterResource("drawable/logo.png") is deprecated; what should be used instead?
You must always push the new YAML into the main (default) branch first; that is the only way GitHub can detect the new workflow. Then you can create a modified version in branch abc and test its run with the GitHub CLI (the workflow must contain workflow_dispatch:):
gh workflow run ci.yml -r abc
Hey, have you found the solution? I am facing the same problem.
I changed to using self.eid.followUps.splice(0) and it worked, using a suggestion from this post (Clearing an array content and reactivity issues using Vue.js).
Did you get any solution for this? Can you share it if so? I have the same issue and no idea how to fix it 😥
I made a PR and a Jira issue:
https://issues.redhat.com/projects/UNDERTOW/issues/UNDERTOW-2552
Perhaps it will land in the next version.
Well, I solved this issue by granting BYPASSRLS and CREATEDB. I don't know which one solved it.
One more thing I want to add: while trying to resolve it, I managed to get rid of the above error and then got a connection error. I think changing the role resolved the connection issue, but I still don't fully understand what resolved the vector problem, because the DB role already had superuser.
To query and sort events by their next upcoming date from an array of dates in Elasticsearch, you need to combine a range filter with a custom script-based sort. Here's how to achieve this:
Use a range query to include events with at least one date in the future:

"query": { "bool": { "filter": { "range": { "OccursOn": { "gte": "now" } } } } }

This ensures only events with dates occurring now or later are included.

Use a Painless script in the sort to find the earliest future date in the OccursOn array:

"sort": [
  {
    "_script": {
      "type": "number",
      "script": {
        "lang": "painless",
        "source": """
          long now = new Date().getTime();
          long nextDate = Long.MAX_VALUE;
          for (def date : doc['OccursOn']) {
            long timestamp = date.toInstant().toEpochMilli();
            if (timestamp >= now && timestamp < nextDate) {
              nextDate = timestamp;
            }
          }
          return nextDate;
        """
      },
      "order": "asc"
    }
  }
]

The script:
Gets the current timestamp
Iterates through all event dates
Identifies the earliest date that hasn't occurred yet
Sorts events ascending by this calculated next date

Putting it together, the full request body is:

{
  "query": { "bool": { "filter": { "range": { "OccursOn": { "gte": "now" } } } } },
  "sort": [
    {
      "_script": {
        "type": "number",
        "script": {
          "lang": "painless",
          "source": """
            long now = new Date().getTime();
            long nextDate = Long.MAX_VALUE;
            for (def date : doc['OccursOn']) {
              long timestamp = date.toInstant().toEpochMilli();
              if (timestamp >= now && timestamp < nextDate) {
                nextDate = timestamp;
              }
            }
            return nextDate;
          """
        },
        "order": "asc"
      }
    }
  ]
}

A few caveats:
Script sorting has performance implications for large datasets.
For better performance, consider pre-calculating the next occurrence date during indexing.
Use a parameterized now in production for consistent timestamps across distributed nodes.

This solution filters events with future dates and sorts them by their earliest upcoming occurrence using Elasticsearch's script sorting capabilities.
I think you are asking two different questions:
How can I specify host(s) without an inventory? The answer to this is to use "-i tomcat-webApp, tomcat-all,". You must include the trailing comma after the last hostname.
ansible-playbook DeployWar.yml \
-i tomcat-webApp,tomcat-all,
Reference: Run Ansible playbook without inventory
How can I pass multiple extra-vars from command line?
ansible-playbook DeployWar.yml \
--extra-vars="testvar1=testing1" --extra-vars="testvar2=testing2"
ansible-playbook DeployWar.yml \
--extra-vars="servers=tomcat-webApp tomcat-all"
Then inside your playbook: {{ servers | split }}
Here is what we have configured in our Helm chart ingress-nginx-4.12.1 to enable config snippets:

proxySetHeaders:
  allow-snippet-annotations: "true"
podAnnotations:
  ingressclass.kubernetes.io/is-default-class: "true"
  allow-snippet-annotations: "true"
I had the same problem. In my case it was caused by there being no data to send to the PDF, so an earlier error produced the same error message.
The fix is to use literal instead of null:
criteriaQuery.set(root.get(MyEntity_.tag), criteriaBuilder.nullLiteral<Tag>(Tag::class.java))
The application expected MediaType.APPLICATION_JSON_VALUE, as you defined in the controller, but you sent an extra ;charset=UTF-8 in the Content-Type header, so Spring cannot find an appropriate mapping.
Either remove the extra fragment from the header, or add it to the controller mapping.
The documentation of read_sql_query says the following:
params : list, tuple or mapping, optional, default: None
List of parameters to pass to execute method. The syntax used to pass parameters is database driver dependent. Check your database driver documentation for which of the five syntax styles, described in PEP 249’s paramstyle, is supported. Eg. for psycopg2, uses %(name)s so use params={‘name’ : ‘value’}.
Since you use the psycopg2 driver, the parameters should be written as @JonSG mentioned. It should be:
select *
FROM public.bq_results br
WHERE cast("eventDate" as date) between
TO_DATE(%(test_start_date)s, 'YYYYMMDD') AND TO_DATE(%(test_end_date)s, 'YYYYMMDD')
limit 10000
Hope this works.
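The same idea can be sketched with the stdlib sqlite3 driver, which accepts :name placeholders (psycopg2 uses %(name)s instead); the table and dates below are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bq_results (eventDate TEXT)")
conn.executemany("INSERT INTO bq_results VALUES (?)",
                 [("2024-03-15",), ("2025-03-15",)])

# Named parameters are passed as a mapping, just like pandas' params= argument
rows = conn.execute(
    "SELECT * FROM bq_results WHERE eventDate BETWEEN :start AND :end",
    {"start": "2024-01-01", "end": "2024-12-31"},
).fetchall()
print(rows)  # [('2024-03-15',)]
```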
I ran into this issue too. To make it easier for others to figure out if they're affected, I created a common reliability enumeration (CRE). You can preview the rule here or run it against your logs here.
For me, what was happening is that I had duplicate values in the "valueField": I was passing in all 0's for this value, so I think it naturally went to the first item on the list. I'm using ngModel in Angular, so I just set valueField="" and that solved the issue.
(1) In the first activity:
Bitmap bitmap = ((BitmapDrawable) holder.newsImage.getDrawable()).getBitmap();
SecondActivity.bitmap = bitmap;
(2) In SecondActivity:
public static Bitmap bitmap = null;
if (bitmap != null) {
    tvImage.setImageBitmap(bitmap);
}
As Reza mentioned, this is a similar question: List Tiles out of container when scrolling. This behavior is not caused by the ReorderableListView, but by the ListTiles. Wrapping them with a Card widget fixed the issue.
We’ve implemented a Twilio-based WhatsApp integration using .NET Core 6 and deployed the application on IIS running on a Windows Server 2022 machine (client's environment). Outbound messages from our application to Twilio are working correctly.
However, incoming messages from Twilio are not reaching our server/application. We’ve already asked the client to allow traffic from *.twilio.com subdomains, but that doesn’t seem to resolve the issue.
Given that this is a production environment and the client is concerned about security, we cannot request them to open all inbound traffic.
My questions:
What specific IP addresses or subdomains should be whitelisted to allow Twilio's webhook requests (WhatsApp messages) to reach the server?
Are there any additional IIS or firewall configurations we should check to ensure that incoming HTTP requests from Twilio are accepted and routed correctly?
Any guidance on how to properly configure the client's firewall or server to receive these requests securely would be highly appreciated.
@Elchanan shuky Shukrun
Can you provide an example of how you got it to work in a pipeline? If I define a string parameter 'payload', it stays empty when using the Gitea plug-in.
So I found the answer to my problem while searching for a solution to a different problem I ran into. Apparently "delete module-info.java at your Project Explorer tab" is what I needed to do. Sorry for bothering everyone.
Not quite what you asked, as this only handles the current folder, but it is a simple method using basic scripting which you can learn from and build on:
#!/bin/bash

# Build an array of MKV files to process in the current folder.
# A glob is safer than parsing the output of ls.
files=( *.mkv )

# Loop through the filenames
for filename in "${files[@]}"
do
    echo "$filename"
    mkvpropedit "$filename" -d title -t all:
done
The mkvpropedit command featured removes the title and all tags, which is what research suggests many people wish to achieve.
The glob that feeds the array could include paths, so it would be:
files=( */*.mkv )
Because the filenames are quoted and come from a glob rather than ls, files and folders with spaces in their names are handled correctly.
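For recursion into subfolders, a find-based loop can be sketched; -print0 with read -d '' keeps names with spaces intact. The demo folder below is made up, and echo stands in for the mkvpropedit call:

```shell
# Make a tiny demo tree, including a filename with a space
mkdir -p demo/sub
touch "demo/a file.mkv" demo/sub/b.mkv

# Null-delimited iteration is safe for spaces (and even newlines) in names
find demo -name '*.mkv' -print0 | while IFS= read -r -d '' f; do
    echo "processing: $f"   # replace echo with: mkvpropedit "$f" -d title -t all:
done
```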
As stated in the documentation:
An easy-to-use web interface
Looker Studio is designed to be intuitive and easy to use. The report editor features simple drag-and-drop objects with fully custom property panels and a snap-to-grid canvas.
An alternative to your approach could be using Custom Queries.
Here is the fixed badge:
<img alt="Static Badge" src="https://img.shields.io/badge/Language-Japanese-red?style=flat-square&link=https%3A%2F%2Fgithub.com%2FTokynBlast%2FpyTGM%2Fblob%2Fmain%2FREADME.jp.md">
Shields.io needs this format:
https://img.shields.io/badge/<label>-<message>-<color>
Without all parts, it shows a broken image.
Now you can display your badge.
document.querySelector(".clickedelement").addEventListener("click", function (e) {
  const div = document.querySelector(".my-div-class");
  div.style.border = "1px solid black";   // add the border on click
  setTimeout(function () {
    div.style.border = "none";            // remove it again after 1s
  }, 1000);
});
div {
width: 100px;
height: 100px;
background-color: yellow;
border-radius: 50%;
}
<p class="clickedelement">when this is <b>clicked</b> i want border added and removed (after 1s) on div below</p>
<div class="my-div-class"></div>
You can do it with CSS.
document.querySelector(".clickedelement").addEventListener("click", function (e) {
// add and remove border on div
});
div {
width: 100px;
height: 100px;
background-color: yellow;
border-radius: 50%;
border: 2px solid #ff000000;
transition: border 2s ease-out;
}
.clickedelement:active~div{
border: 2px solid #ff0000;
transition: border 100ms ease-out;
}
<p class="clickedelement">when this is clicked i want border added and removed (after 1s) on div below</p>
<div class=""></div>
Not directly related, but for anyone facing a similar issue now:
snowflake.connector.errors.OperationalError: 254007: The certificate is revoked or could not be validated: hostname=xxxxxxxxx
Upgrading snowflake-python-connector to 3.15.0 helped me resolve it.
Reference: https://github.com/orgs/community/discussions/157821#discussioncomment-12977833
Did you ever figure this out? I'm having the same issue.
You should use the same session as your DataFrame:
df.sparkSession.sql("select count(*) from test_table limit 100")
After removing Docker Desktop, restart your computer and WSL will revert to the Docker CE version.
It seems that I can't comment, so I have to leave another answer.
@rzwitserloot - I like your very thorough answer about why using existing "stuff" is confusing and difficult to implement.
I'd like to throw out a suggestion though. On the simpler annotations that only generate one thing (NoArgs, AllArgs, etc.), don't reuse existing annotations. Add a new parameter: @NoArgsConstructor( javadoc=" My meaningful Description - blah, blah, blah, \n new line of more blah \n @Author Me \n @Version( 1.1.1)")
This would generate (literally, exactly the text provided, plus the comment markers):
/**
* My meaningful Description - blah, blah, blah,
* new line of more blah
* @Author Me
* @Version( 1.1.1)
*/
Use only minimal interpretation; in my example, only the "\n" for a new line, and maybe add the tab "\t".
Another option would be to allow only actual tabs and newlines inside the quotes, so the only 'processing' would be adding the comment characters.
My justification for this suggestion is that JavaDoc seems to produce a lot of messages as WARNINGS. It is very easy to miss more obvious problems because they are lost among the WARNINGS. I make an effort to clear out warnings so that I don't miss problems.
I understand this may be more difficult than I am making it out, but my goal is to get rid of the warnings so that I don't miss other important messages.
Thanks!
I haven't looked deeply into Lombok code, but this seems like a reasonable solution.
Nice solution. (No new answer; I modified the original question.)
%macro create_table();
    PROC SQL;
        CREATE TABLE TEST AS
        SELECT DATE, NAME,
        %DO i = 1 %to 3;
            0 AS A&i.,
        %END;
        1 as B
        FROM SOURCE;
    QUIT;
%mend create_table;

%create_table();
Can this be expanded to allow a %let evaluation within the loop (or something else that will hold a new macro variable)?
I have a large number of columns that in my case look over 13 quarters of data, and within each bucket of 40 or so columns (13 x 40) there are a good number that look back to year-ago data.
Thus I need something like:
%let iPY = %eval(&i. + 4);
but I would love to avoid the +4 calculation for each needed column in a quarter.
Upgrading to Python 3.13 solved the issue, so the explanation is probably a version incompatibility with Windows 10 and Python 3.12 for this particular case.
You can check and verify the form validations.
Check the file object and the length of the file: is there a broken image that is not going to upload?
You can create an array of the new values that you want to update, then create the $user object and update the values. I hope this covers all the points.
Thanks
Is there a workaround for DirectQuery? From what I have found, you cannot use UNPIVOT in DirectQuery.
You can create a new Service Account and generate new keys
from dotenv import load_dotenv
import os
load_dotenv()
api_key = os.getenv("BYBIT_API_KEY")
api_secret = os.getenv("BYBIT_API_SECRET")
This works for one time series, but how would it work for a multi-object set?
We just went through the process of trying to get premium data access from Google with no success - it seems that they are extremely strict about which use-cases they give access to. I am building the reputation platform ReviewKite, and we eventually decided to use the BrightLocal API under the hood to scrape reviews from Google. It's expensive, but worth it for ease of use.
I got it. The main reason why I was getting the null rows after each row was hidden characters in the CSV file:

Windows ends lines with Carriage Return + Line Feed (\r\n)
Unix/Linux uses just Line Feed (\n)

If a file created on Windows is read on a Unix/Linux system (like Hadoop/Hive), the \r character can:

Show up as "invisible junk" (like ^M or CR)
Break parsers or formatters (like Hive or awk), resulting in:
extra blank rows
all-NULL columns
malformed data

So that's the reason why I was getting empty null rows after each valid data row.

Solution: I used dos2unix, which converts the files to Unix format, and I got the expected result.
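If dos2unix isn't available, the same conversion can be sketched with tr; the file names and contents below are made up for the demo:

```shell
# Create a small Windows-style (CRLF) file, then strip the carriage returns
printf 'id,name\r\n1,alice\r\n' > input.csv
tr -d '\r' < input.csv > output.csv   # equivalent to dos2unix for this purpose
```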
import 'package:firebase_auth/firebase_auth.dart';
I've been stuck on this for a while now, and I've been carrying the fix you mentioned across different versions of Yocto. I wasn't proud of it, because nobody else did that except me, so the problem was not Yocto and was probably coming from elsewhere. When moving to Scarthgap this fix stopped doing its job, so I had to find the root cause.
I was building my libraries with a Makefile like so:
lib:
$(CC) -fPIC -c $(LDFLAGS) $(CFLAGS) $(CFILES)
$(CC) $(LDFLAGS) -shared -o libwhatever.so.1 $(OFILES)
ln -sf libwhatever.so.1 libwhatever.so
What was missing is that I needed to add:
LDFLAGS += -Wl,-soname,libwhatever.so.1
so that the Yocto QA check is able to actually find the proper names directly inside the .so files.
If you want to verify if that fix is for you, you can check the SONAME like so:
readelf -d tmp/work/<machine>/libwhatever/<PV>/image/usr/lib/libwhatever.so* | grep SONAME
0x000000000000000e (SONAME) Library soname: [libwhatever.so.1]
If you get no output from this command, then you have the same issue I had and the above fix will work for you.
In Flutter iOS, after setting up signing and other things in Xcode, go back to Android Studio, run a clean, and run iOS from Android Studio as the first run.
Check the directory and change it accordingly, or use the full path from the location of the main PHP file.
echo '<br>'.__DIR__;
echo getcwd();
chdir(__DIR__);
here are more images to illustrate the problem. Below is for regex.h
Below is for "myconio.h"
I have handled a similar use case with a 'Bot' in AppSheet that listens for a 'Success' or 'Error' value returned from the API call and then branches on the value of that returned data to send an in-app notification to the user and a message to the app administrator that an API call has failed or to do nothing and proceed with the next step. Example automation setup below. I can post more details if that seems like something that would work in your situation.
I have my controller with the tips you said, but the borders are still not corrected. When I initialize my panel to set the background with:
yourPane.setBackground(new Background(new BackgroundFill(Color.TRANSPARENT, CornerRadii.EMPTY, Insets.EMPTY)));
I get an error on Insets.EMPTY. I tried to fix it like this, but I still don't see the borders corrected:
void initialize() {
    confirmDialog.setBackground(new Background(new BackgroundFill(Color.TRANSPARENT, CornerRadii.EMPTY, new Insets(0))));
}
Ah, I found the answer on Reddit. Doing SUM(Visited)/Sum(Total)*100 seems to have worked
There is a bug in Sidekiq 8; see https://github.com/sidekiq/sidekiq/issues/6695
I had the same problem; updating TensorFlow to the latest version (2.19) solved everything.
For those with older project settings without Gradle version catalogs:

Define compose_compiler_version = '2.0.0' in the project gradle file:
buildscript {
    ext {
        compose_compiler_version = '2.0.0'
    }
}

Add plugins to the project gradle file:
plugins {
    id("org.jetbrains.kotlin.plugin.compose") version "$compose_compiler_version" // this version matches your Kotlin version
}
------------------------
Add plugins to the module gradle file:
plugins {
    id "org.jetbrains.kotlin.plugin.compose"
}

Update your module gradle file dependencies: replace the old androidx.compose.compiler:compiler with the new org.jetbrains.kotlin:kotlin-compose-compiler-plugin-embeddable:
dependencies {
    implementation "org.jetbrains.kotlin:kotlin-compose-compiler-plugin-embeddable:$compose_compiler_version"
}

If you have composeOptions in your module gradle file, also update the version:
composeOptions {
    kotlinCompilerExtensionVersion compose_compiler_version
}
This usually means your app isn’t able to connect to the SMTP server. It might seem like a PHPMailer issue, but most of the time it’s a network problem on the server.
If your app is hosted somewhere that has support, I recommend reaching out to them. Ask them to check and make sure that port 465 is open for sending emails using SMTP with SSL.
So the problem I continuously have with subprocess.run is that it opens a subshell whereas os.system runs in my current shell. This has bitten me several times. Is there a way in subprocess to execute without actually creating the subshell?
I am very late to the party, but I wonder if this could work: https://rdrr.io/cran/spatialEco/man/sf_dissolve.html
I am not sure whether this dissolving by features can be implemented with sf functions, happy to learn from more experienced people around :)
Be careful using that ID: it's an incremental ID, not a random one.
Is it OK to use it? Well... if you need a RANDOM id, then don't use it.
I found the problem. I was using the wrong import. I had:
import io.ktor.http.headers
But it should be:
import io.ktor.client.request.headers
Thanks it worked for me as well
The problem was solved by running the command in Git Bash rather than PowerShell:
keytool -exportcert -alias YOUR_ALIAS -keystore YOUR_KEYSTORE_PATH | openssl sha1 -binary | openssl base64
where:
YOUR_ALIAS – your keystore alias
YOUR_KEYSTORE_PATH – the path to your .keystore file
There's no easy answer to this issue. The only way to solve it is by implementing the custom domain into the applications and Azure AD B2C. This issue is also known by OpenID Connect: https://openid.net/specs/openid-connect-frontchannel-1_0.html#ThirdPartyContent; basically, many browsers block the cookie value from other websites.
You can check the microsoft documentation too: https://learn.microsoft.com/en-us/entra/identity-platform/reference-third-party-cookies-spas
To solve it, you need to use a custom domain. In my case, it's something I will need, so it becomes a bit convenient. My Azure AD B2C is using a new subdomain called login.mydomain.com, and my apps are at app1.mydomain.com and app2.mydomain.com. So when the iframe calls app1.mydomain.com/logout, the session is revoked as well, and every logged user/cache is cleared.
What ended up working (at least for what I need) is actually using =DATEDIF(A11, TODAY(), "D") and then dividing that number of days by 30.4375. I got the 30.4375 by dividing 365.25 by 12 months ((365+365+365+366)/4 = 365.25; accounting for centurial years, the Gregorian average is 365.2425).
To use the package after installation, first verify it's installed by running pip list. Then add its path to sys.path to import and use it normally.
If a package is installed but its directory is not automatically on Python's path, you can manually include that directory in sys.path.
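A minimal, self-contained sketch of the idea (the directory, module name, and value are invented for the demo):

```python
# Manually extending sys.path so a module outside the default path can be imported
import os
import sys
import tempfile

pkg_dir = tempfile.mkdtemp()                      # stands in for your package dir
with open(os.path.join(pkg_dir, "mymodule.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.append(pkg_dir)                          # make the directory importable
import mymodule

print(mymodule.VALUE)  # 42
```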
What I'm interested in is understanding why I get that error. Is it something I'm doing wrong, I'm missing something or is it a bug in gold?
You aren't missing anything (at least nothing relevant to the linkage failure) and you aren't doing anything wrong. There is a corner-case bug in ld.gold.
Repro
I have your program source in test.cpp. I haven't installed the header-only libraries spdlog or fmt; I've just cloned the repos for the present purpose.
$ g++ --version | head -n1
g++ (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
$ ld.gold --version | head -n1
GNU gold (GNU Binutils for Ubuntu 2.42) 1.16
$ export CPATH=$HOME/develop/spdlog/include:$HOME/develop/fmt/include
$ g++ -c test.cpp
Link without -gc-sections:
$ g++ test.o -fuse-ld=gold -static; echo Done
Done
And with -gc-sections:
$ g++ test.o -fuse-ld=gold -static -Wl,-gc-sections; echo Done
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x74): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
collect2: error: ld returned 1 exit status
Done
Trying other linkers
Besides gold, -fuse-ld recognises three other ELF linkers. Let's try them all at that linkage:
ld.bfd (the default GNU linker)
$ ld.bfd --version | head -n1
GNU ld (GNU Binutils for Ubuntu) 2.42
$ g++ test.o -fuse-ld=bfd -static -Wl,-gc-sections
$ ./a.out
Hello, fmt!
[2025-05-06 18:17:10.378] [info] Hello, spdlog!
ld.lld (the LLVM linker)
$ ld.lld --version | head -n1
Ubuntu LLD 18.1.6 (compatible with GNU linkers)
$ g++ test.o -fuse-ld=lld -static -Wl,-gc-sections
$ ./a.out
Hello, fmt!
[2025-05-06 18:18:21.994] [info] Hello, spdlog!
ld.mold (the Modern linker)
$ ld.mold --version | head -n1
mold 2.30.0 (compatible with GNU ld)
$ g++ test.o -fuse-ld=mold -static -Wl,-gc-sections
$ ./a.out
Hello, fmt!
[2025-05-06 18:22:47.597] [info] Hello, spdlog!
So gold is the only one that can't link this program.
What is gold doing wrong?
The first diagnostic:
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x14): \
error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
is reporting that the relocation target at offset 0x14 in section .note.stapsdt of object file libc.a(pthread_create.o) refers to the local symbol .text, which is symbol #1 in that object file, and that this relocation can't be carried out because the section in which that symbol is defined has been discarded.
The second diagnostic is just the same, except that the relocation target this time is at offset 0x74, so we'll just pursue the first diagnostic.
Let's check that it's true.
First get that object file:
$ ar -x $(realpath /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a) pthread_create.o
Check out the relocations for its .note.stapsdt section:
$ readelf -rW pthread_create.o
...[cut]...
Relocation section '.rela.note.stapsdt' at offset 0x46f8 contains 4 entries:
Offset Info Type Symbol's Value Symbol's Name + Addend
0000000000000014 0000000100000001 R_X86_64_64 0000000000000000 .text + 40e
000000000000001c 0000003000000001 R_X86_64_64 0000000000000000 _.stapsdt.base + 0
0000000000000074 0000000100000001 R_X86_64_64 0000000000000000 .text + c7b
000000000000007c 0000003000000001 R_X86_64_64 0000000000000000 _.stapsdt.base + 0
...[cut]...
Yes, it has relocation targets at offsets 0x14 and 0x74. The first one is to be patched using the address of symbol #1 ( = Info >> 32) in the symbol table (which we're told is .text) + 0x40e. Symbol #1 in pthread_create.o is
$ readelf -sW pthread_create.o | grep ' 1:'
1: 0000000000000000 0 SECTION LOCAL DEFAULT 2 .text
indeed the local symbol .text (a section name), and it is defined in section #2 of the file, which of course is:
$ readelf -SW pthread_create.o | grep ' 2]'
[ 2] .text PROGBITS 0000000000000000 000050 001750 00 AX 0 0 16
the .text section.
So the diagnostic reports that gold has binned the .text section of pthread_create.o. Let's ask gold to tell us what sections of pthread_create.o it discarded.
$ g++ test.o -fuse-ld=gold -static -Wl,-gc-sections,-print-gc-sections 2>&1 | grep pthread_create.o
/usr/bin/ld.gold: removing unused section from '.text' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.data' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.bss' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.rodata.str1.1' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.rodata.str1.8' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.text.unlikely' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.rodata.str1.16' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.rodata.cst4' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.rodata' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.gold: removing unused section from '.rodata.cst8' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x74): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
It discarded 10 of the:
$ readelf -SW pthread_create.o | head -n1
There are 23 section headers, starting at offset 0x48c0:
23 sections in the file, including .text, as compared with:
$ g++ test.o -fuse-ld=bfd -static -Wl,-gc-sections,-print-gc-sections 2>&1 | grep pthread_create.o
/usr/bin/ld.bfd: removing unused section '.group' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.bfd: removing unused section '.stapsdt.base[.stapsdt.base]' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.bfd: removing unused section '.rodata.cst4' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
/usr/bin/ld.bfd: removing unused section '.rodata' in file '/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)'
the 4 sections discarded by ld.bfd, excluding .text. gold also retains 2 sections (.group, .stapsdt.base) that bfd discards, but the outcome says that gold has chucked out a baby with the bathwater.
The linkage error is (all but) a false alarm
The retention of section .note.stapsdt from pthread_create.o sets it off. This section is retained because any output .note.* section will be a GC-root section for any linker: .note sections are conventionally reserved for special information to be consumed by other programs, and as such are unconditionally retained in the same way as ones defining external symbols. .note.stapsdt sections in particular are emitted to expose patch points for the runtime insertion of Systemtap instrumentation hooks.
Presumably, you don't care if this program has Systemtap support. You've just got it because it's compiled into pthread_create.o (and elsewhere in GLIBC). The enabling .note.stapsdt section is a GC-root section in pthread_create.o that references its .text section. But your program has no functional need for that .text section. We can observe this by just blowing through the linkage failure with:
$ rm a.out
$ g++ test.o -fuse-ld=gold -static -Wl,-gc-sections,--noinhibit-exec
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x74): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
--noinhibit-exec tells the linker to output a viable image if it can make one, notwithstanding errors. And in this case:
$ ./a.out
Hello, fmt!
[2025-05-07 10:51:57.987] [info] Hello, spdlog!
The .text section of pthread_create.o is garbage-collected; the linkage errors, but the program is perfectly fine.
So we'd expect a clean linkage if we yank .note.stapsdt out of pthread_create.o and interpose the modified object file in the link, and so we do:
$ objcopy --remove-section='.note.stapsdt' pthread_create.o pthread_create_nostap.o
$ g++ test.o pthread_create_nostap.o -fuse-ld=gold -static -Wl,-gc-sections
$ ./a.out
Hello, fmt!
[2025-05-07 11:08:27.647] [info] Hello, spdlog!
The program is fine without the .note.stapsdt and/or the .text section of pthread_create.o, but Systemtap would not be fine with the program. That's the cash value of the linkage failure.
The linkage error has nothing to do with your particular program.
Check out this deranged linkage:
$ cat main.c
int main(void)
{
return 0;
}
$ gcc main.c -static -Wl,-gc-sections,--whole-archive,-lc,--no-whole-archive
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(dso_handle.o):(.data.rel.ro.local+0x0): multiple definition of `__dso_handle'; /usr/lib/gcc/x86_64-linux-gnu/13/crtbeginT.o:(.data+0x0): first defined here
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(rcmd.o): in function `__validuser2_sa':
(.text+0x5e8): warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(rcmd.o): note: the message above does not take linker garbage collection into account
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(dl-reloc-static-pie.o): in function `_dl_relocate_static_pie':
(.text+0x0): multiple definition of `_dl_relocate_static_pie'; /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/crt1.o:(.text+0x30): first defined here
collect2: error: ld returned 1 exit status
where I'm trying and failing to make a garbage-collected static linkage of the whole of GLIBC into a do-nothing program, with the default linker.
Now let's repeat the failure with gold:
$ gcc main.c -fuse-ld=gold -static -Wl,-gc-sections,--whole-archive,-lc,--no-whole-archive
/usr/bin/ld.gold: error: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(dso_handle.o): multiple definition of '__dso_handle'
/usr/bin/ld.gold: /usr/lib/gcc/x86_64-linux-gnu/13/crtbeginT.o: previous definition here
/usr/bin/ld.gold: error: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(dl-reloc-static-pie.o): multiple definition of '_dl_relocate_static_pie'
/usr/bin/ld.gold: /usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/crt1.o: previous definition here
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_cond_destroy.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_cond_init.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_create.o)(.note.stapsdt+0x74): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_join_common.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_join_common.o)(.note.stapsdt+0x5c): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_mutex_destroy.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_mutex_init.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_mutex_timedlock.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_mutex_timedlock.o)(.note.stapsdt+0x68): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_mutex_timedlock.o)(.note.stapsdt+0xbc): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_mutex_timedlock.o)(.note.stapsdt+0x11c): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(pthread_rwlock_destroy.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(____longjmp_chk.o)(.note.stapsdt+0x14): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
/usr/lib/gcc/x86_64-linux-gnu/13/../../../x86_64-linux-gnu/libc.a(____longjmp_chk.o)(.note.stapsdt+0x64): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
collect2: error: ld returned 1 exit status
Now we're sprayed with:
libc.a(???.o)(.note.stapsdt+???): error: relocation refers to local symbol ".text" [1], which is defined in a discarded section
errors that weren't there before, the great majority of the ???.o being pthread_???.o.
How does gold come to disregard the .note.stapsdt references into .text in pthread_create.o?
To understand that I had to get the binutils-gdb source code, study the gold source and debug a build of it on the problem linkage with ad-hoc diagnostics added. Here is the gist.
gold's GC algorithm initially reserves a set of GC-root sections in the pre-GC linkage to be retained unconditionally. These include the section that contains the _start symbol (or other non-default program entry symbol), plus all sections that match a hard-coded set of prefixes or names, including all .note.* sections. So pthread_create.o(.note.stapsdt) is one of them.
For each section src_object.o(.src_sec) of each object file linked - provided it is a type-ALLOC section - GC maps that section to the list of the relocations ( = references) from src_object.o(.src_sec) into any other input section dest_object.o(.dest_sec), so that if src_object.o(.src_sec) is retained then dest_object.o(.dest_sec) will also be retained. An ALLOC section here means one that will occupy space in the process image, as indicated by the flag SHF_ALLOC set in the section header. This property can be taken to mean that the section would be worth garbage collecting. The algorithm discovers the relocations by reading the corresponding relocations section src_object.o(.rel[a].src_sec).
Then, starting with the GC-root sections, the algorithm recursively determines for each retained section what other sections its refers to, as per its associated relocations, and adds the sections referred to to the retained list. Finally, all sections not retained are discarded.
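In the abstract, the mark phase just described is a transitive closure over relocation edges starting from the GC roots. A toy sketch of it (nothing like gold's actual C++; the section names and the edge map are invented for illustration):

```python
# Toy model of a linker's --gc-sections mark phase (not gold's real code).
# 'refs' maps each input section to the sections its relocations refer to.
def gc_mark(roots, refs):
    """Return the set of sections retained, starting from the GC roots."""
    retained = set()
    worklist = list(roots)
    while worklist:
        sec = worklist.pop()
        if sec in retained:
            continue
        retained.add(sec)
        worklist.extend(refs.get(sec, ()))  # follow this section's relocations
    return retained

# Hypothetical mini-linkage: the note section is a GC root and its
# relocations refer into .text, so .text must survive the sweep.
refs = {"pthread_create.o(.note.stapsdt)": ["pthread_create.o(.text)"]}
roots = {"main.o(.text)", "pthread_create.o(.note.stapsdt)"}
print(sorted(gc_mark(roots, refs)))
```

Run correctly, pthread_create.o(.text) lands in the retained set - which is exactly the step gold gets wrong, as shown next.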
This is all as it should be, except for the winnowing out of sections that are not type ALLOC from relocations gathering. That is a flaw, because a .note.* section, depending on its kind, might be type ALLOC (e.g. .note.gnu.property, .note.ABI-tag in this linkage) or it might not (e.g. .note.gnu.gold-version, .note.stapsdt in this linkage), and being non-ALLOC does not preclude it having relocations into ALLOC sections. The bug will sleep soundly as long as a non-ALLOC .note.* section that is winnowed out of GC relocations processing does not contain relocations.
Section pthread_create.o(.note.stapsdt) is non-ALLOC:
$ readelf -SW pthread_create.o | egrep '(.note.stapsdt|Section|Flg)'
Section Headers:
[Nr] Name Type Address Off Size ES Flg Lk Inf Al
[ 8] .note.stapsdt NOTE 0000000000000000 001928 0000c8 00 0 0 4
[ 9] .rela.note.stapsdt RELA 0000000000000000 0046f8 000060 18 I 20 8 8
(Flg A not set), but it does have relocations. So the bug bites. The GC algorithm never sees the associated relocations in .rela.note.stapsdt that refer to pthread_create.o(.text). When it finds that pthread_create.o(.note.stapsdt) is non-ALLOC it just skips over pthread_create.o(.rela.note.stapsdt) without further ado.
Thus GC never records that pthread_create.o(.note.stapsdt) - retained - refers to pthread_create.o(.text), and since nothing else refers to pthread_create.o(.text), it is discarded. When the time comes to apply relocations to pthread_create.o(.note.stapsdt), the section they refer to is no longer in the linkage.
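The failure mode can be reproduced in a toy model by adding gold's winnowing test: relocation edges are gathered only from ALLOC sections, so a non-ALLOC note's relocations are invisible to the mark phase (again purely illustrative, not gold's code):

```python
# Toy model of the gold flaw: relocation edges are gathered only for
# ALLOC sections, so edges out of a non-ALLOC .note.stapsdt are lost.
def gc_mark_winnowed(roots, refs, alloc):
    retained = set()
    worklist = list(roots)
    while worklist:
        sec = worklist.pop()
        if sec in retained:
            continue
        retained.add(sec)
        if sec in alloc:                        # the winnowing test:
            worklist.extend(refs.get(sec, ()))  # non-ALLOC sections add no edges
    return retained

refs = {"note.stapsdt": ["text"]}  # the note's relocations refer into .text
alloc = {"text"}                   # .note.stapsdt itself is non-ALLOC
retained = gc_mark_winnowed({"note.stapsdt"}, refs, alloc)
print(retained)  # .text is discarded, yet the retained note still
                 # carries relocations into it - hence the error
```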
A comment in file binutils-gdb/gold/reloc.cc explaining the flawed winnowing test:
// We are scanning relocations in order to fill out the GOT and
// PLT sections. Relocations for sections which are not
// allocated (typically debugging sections) should not add new
// GOT and PLT entries. So we skip them unless this is a
// relocatable link or we need to emit relocations. FIXME: What
// should we do if a linker script maps a section with SHF_ALLOC
// clear to a section with SHF_ALLOC set?
illuminates how .note.stapsdt sections fall through the cracks. It is unclear to me why this a priori logic should be allowed to prevail over contrary evidence that a non-ALLOC .somesec section does have relocations, as provided by the existence of a .rel[a].somesec section. If such non-ALLOC sections were acknowledged they would need to be deferred for special "inverted" GC-handling: instead of taking their retention to entail the retention of any sections that they transitively refer to, GC would need to determine what other sections are to be discarded without reference to the non-ALLOC ones, and then also discard all the non-ALLOC ones that refer only to already discarded sections.
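That deferred, "inverted" handling might look like this in a toy model (purely a sketch of the suggestion above; nothing of the sort exists in gold):

```python
# Sketch of the 'inverted' GC handling suggested above (illustrative only).
def gc_mark_inverted(roots, refs, alloc):
    # Phase 1: ordinary mark over ALLOC sections, ignoring non-ALLOC roots.
    retained = set()
    worklist = [s for s in roots if s in alloc]
    while worklist:
        sec = worklist.pop()
        if sec in retained:
            continue
        retained.add(sec)
        worklist.extend(t for t in refs.get(sec, ()) if t in alloc)
    # Phase 2: a non-ALLOC root survives only if at least one of its
    # relocation targets survived (or it has no relocations at all).
    for sec in set(roots) - set(alloc):
        targets = refs.get(sec, ())
        if not targets or any(t in retained for t in targets):
            retained.add(sec)
    return retained

refs = {"note.stapsdt": ["pthread.text"]}
alloc = {"pthread.text", "main.text"}
print(sorted(gc_mark_inverted({"main.text", "note.stapsdt"}, refs, alloc)))
# the orphaned note is quietly dropped instead of producing a
# relocation error at apply time
```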
The open FIXME is pointed in our context because it foresees the a priori logic coming unstuck, but not in quite the way that we observe.
Is there a gold workaround?
That code comment kindles hope that we might dodge the bug if we were either to:
- make a relocatable (-r|--relocatable), static, garbage-collected preliminary linkage of test.o, then statically link the resulting object file into a program, requesting -nostartfiles to avoid linking the startup code twice;
or:
- link with relocations emitted (-q|--emit-relocs), even though we don't want to emit relocations.
But gold will not play with either of these desperate ruses. The first one:
$ g++ -o bigobj.o test.o -fuse-ld=gold -static -Wl,-r,--entry=_start,-gc-sections
/usr/bin/ld.gold: error: cannot mix -r with --gc-sections or --icf
/usr/bin/ld.gold: internal error in do_layout, at ../../gold/object.cc:1939
collect2: error: ld returned 1 exit status
And the second one:
$ g++ test.o -fuse-ld=gold -static -Wl,-q,-gc-sections
/usr/bin/ld.gold: internal error in do_layout, at ../../gold/object.cc:1939
collect2: error: ld returned 1 exit status
Both of them work with ld.bfd, where they're not needed (they also work with mold, and both fail with lld). AFAICS the only remedies that work for gold are the ones we've already seen: either link with --noinhibit-exec, or else use objcopy to make sanitised copies of the problem object files from which the redundant .note.stapsdt sections are deleted. At a stretch these might be called workarounds, but hardly gold workarounds. Obviously a reasonable person would give up on gold and use one of the other linkers that just works (as indeed you are resigned to do).
Reporting the bug will likely be thankless because gold is moribund, as @Eljay commented.
Something you are maybe missing (though not relevantly to the linkage failure)
The linkage option -gc-sections is routinely used in conjunction with the compiler options -ffunction-sections and -fdata-sections. These respectively direct the compiler to emit each function definition or data object definition in a section by itself, and that empowers GC to work unhandicapped by facing unreferenced definitions that it cannot discard because they are located in sections that also contain referenced definitions.
In object code from which template instantiations are altogether absent or not prevalent, omitting -ffunction-sections, -fdata-sections at compilation will normally render the pay-off of -gc-sections considerably sub-optimal. If template instantiations are prevalent, the handicap is mitigated pro rata to their prevalence by the fact that the C++ compiler for technical reasons places template instantiations in their own sections anyway. The handicap is further mitigated by optimisation level, so for a C++ program made almost entirely of template instantiations such as yours, with -O3 optimisation, -ffunction-sections, -fdata-sections at compilation may have little to no benefit on GC. But as a rule they will produce a GC benefit and the only effect they can have is for the better.
Updated version:
String.format("%05d", num)
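For comparison, the same zero-padding in Python (the answer above is Java; this cross-language aside is mine):

```python
num = 42
# f-string equivalent of Java's String.format("%05d", num)
print(f"{num:05d}")   # → 00042
# old-style %-formatting, same result
print("%05d" % num)   # → 00042
```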
I don't know your file tree, so I can't know if my answer is correct or not. Try './assets/images/header/background.webp' instead of '/assets/images/header/background.webp'.
I think the site may be redirecting you, which generates the error. If you use the -L flag ("follow redirects") it should work. Note that the URL must be quoted, otherwise the shell treats the & as a background operator:
#!/bin/bash
echo "Downloading Visual Studio Code..."
curl -L -o VScode.zip "https://code.visualstudio.com/sha/download?build=stable&os=darwin-universal"
To fix this problem, set an environment variable PYTHON_BASIC_REPL to any value in Windows PowerShell, for example:
$env:PYTHON_BASIC_REPL = 'True'
and then call python.exe. Then you can type all characters entered with AltGr.
I had a similar problem and was wondering about it. There should also be a bugfix included in the new 3.13.4 release. This is for all those who encountered the problem and happened to come across this page via Google.
Many regards
It may be caused by wsgi.py and server settings; also make sure you added the app name to INSTALLED_APPS in settings.py.
My preference for Django projects is render.com. You can try it for free.
EDIT: I figured out the issue. The problem is that new_line really needs to point to a new line (a green line in the PR view). If it's not green I have to supply both new_line and old_line. If it's a red line I have to supply old_line.
Thank you!
I tried using a token instead of user/pass, and it's been working longer, but I need to keep an eye on it. However, this does not explain why my other services are working; only this one is disconnecting and is not showing any errors.
First, I created a token with
influx auth create \
--org <ORG_NAME> \
--read-buckets \
--write-buckets \
--description "Token for My App"
and then
InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
nvm use 16.13
Make sure to use it in every terminal related to running the project.
Delete the node_modules folder.
Restart Metro and run again:
yarn start --reset-cache
yarn android
It took me a while to realize this, but in my case I actually had to head to Output and then select "Python Test Log":
I'm used to building data viz in Redash or Grafana, which both have the workflow "New Dash" --> "New Chart" --> [write some SQL, choose visualization options for the output] --> done. For a new work project, I have to build a dash in Looker Studio instead.